van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.
Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge concerning genetic testing and the related consequences for decision-making indicate the societal relevance of such a situated learning approach. What content knowledge do biology teachers need for teaching genetics in the personal health context of genetic testing? This study describes the required content knowledge by exploring the educational practice and clinical genetic practices. Nine experienced teachers and 12 respondents representing the clinical genetic practices (clients, medical professionals, and medical ethicists) were interviewed about the biological concepts and ethical, legal, and social aspects (ELSA) of testing they considered relevant to empowering students as future health care clients. The ELSA suggested by the respondents were complemented by suggestions found in the literature on genetic counselling. The findings revealed that the required teacher knowledge consists of multiple layers that are embedded in specific genetic test situations: on the one hand, the knowledge of concepts represented by the curricular framework and some additional concepts (e.g. multifactorial and polygenic disorder) and, on the other hand, more knowledge of ELSA and generic characteristics of genetic test practice (uncertainty, complexity, probability, and morality). Suggestions regarding how to translate these characteristics, concepts, and ELSA into context-based genetics education are discussed.
Gregorio Sergio E
Motivation: Ontology development and the annotation of biological data using ontologies are time-consuming exercises that currently require input from expert curators. Open, collaborative platforms for biological data annotation enable the wider scientific community to become involved in developing and maintaining such resources. However, this openness raises concerns regarding the quality and correctness of the information added to these knowledge bases. The combination of a collaborative web-based platform with logic-based approaches and Semantic Web technology can be used to address some of these challenges and concerns. Results: We have developed the BOWiki, a web-based system that includes a biological core ontology. The core ontology provides background knowledge about biological types and relations. Against this background, an automated reasoner assesses the consistency of new information added to the knowledge base. The system provides a platform for research communities to integrate information and annotate data collaboratively. Availability: The BOWiki and supplementary material are available at http://www.bowiki.net/. The source code is available under the GNU GPL from http://onto.eva.mpg.de/trac/BoWiki.
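The reasoner's role described above can be illustrated with a deliberately tiny sketch: a hand-written "core ontology" declares which relation may hold between which types, and new assertions are rejected if they violate it. All type, relation, and entity names below are invented for illustration and are not BOWiki's actual ontology.

```python
# Minimal illustration of ontology-backed consistency checking: the
# "core ontology" restricts the subject and object types of each
# relation, and an assertion is accepted only if it conforms.

CORE_ONTOLOGY = {
    # relation: (allowed subject type, allowed object type)
    "part_of":   ("CellComponent", "Cell"),
    "regulates": ("Protein", "Gene"),
}

TYPES = {
    "mitochondrion": "CellComponent",
    "hepatocyte":    "Cell",
    "p53":           "Protein",
    "MDM2":          "Gene",
}

def is_consistent(subject, relation, obj):
    """Return True iff the assertion respects the core ontology."""
    if relation not in CORE_ONTOLOGY:
        return False
    subj_type, obj_type = CORE_ONTOLOGY[relation]
    return TYPES.get(subject) == subj_type and TYPES.get(obj) == obj_type

# A valid annotation is accepted; a type-violating one is rejected.
assert is_consistent("mitochondrion", "part_of", "hepatocyte")
assert not is_consistent("p53", "part_of", "MDM2")
```

A real OWL reasoner does far more (subsumption, property restrictions, open-world semantics), but the accept/reject pattern against background type knowledge is the same.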
An, Gary C
The greatest challenge facing the biomedical research community is the effective translation of basic mechanistic knowledge into clinically effective therapeutics. This challenge is most evident in attempts to understand and modulate "systems" processes/disorders, such as sepsis, cancer, and wound healing. Formulating an investigatory strategy for these issues requires the recognition that these are dynamic processes. Representation of the dynamic behavior of biological systems can aid in the investigation of complex pathophysiological processes by augmenting existing discovery procedures by integrating disparate information sources and knowledge. This approach is termed Translational Systems Biology. Focusing on the development of computational models capturing the behavior of mechanistic hypotheses provides a tool that bridges gaps in the understanding of a disease process by visualizing "thought experiments" to fill those gaps. Agent-based modeling is a computational method particularly well suited to the translation of mechanistic knowledge into a computational framework. Utilizing agent-based models as a means of dynamic hypothesis representation will be a vital means of describing, communicating, and integrating community-wide knowledge. The transparent representation of hypotheses in this dynamic fashion can form the basis of "knowledge ecologies," where selection between competing hypotheses will apply an evolutionary paradigm to the development of community knowledge.
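As a loose illustration of the agent-based approach described above, here is a minimal toy model in Python: each cell-agent follows simple local rules, and system-level behaviour emerges from their interaction. The rules and parameters are invented for illustration and do not come from any published sepsis or wound-healing model.

```python
import random

# Toy agent-based model of a mechanistic hypothesis: "damaged" cells
# recruit neighbouring healthy cells into the damaged state with
# probability p, and recover with probability q per step.

def step(states, p=0.5, q=0.2, rng=None):
    rng = rng or random
    new = states[:]
    for i, s in enumerate(states):
        if s == "healthy":
            neighbours = states[max(0, i - 1):i + 2]
            if "damaged" in neighbours and rng.random() < p:
                new[i] = "damaged"
        elif rng.random() < q:
            new[i] = "healthy"
    return new

rng = random.Random(42)          # fixed seed for a reproducible run
cells = ["healthy"] * 10
cells[5] = "damaged"             # seed a single injury
for _ in range(20):
    cells = step(cells, rng=rng)
print(cells.count("damaged"), "damaged cells after 20 steps")
```

Running such a model repeatedly under different parameter settings is the "thought experiment" role the abstract describes: the hypothesis encoded in the rules becomes directly executable and inspectable.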
O'Malley, Maureen A; Powell, Alexander; Davies, Jonathan F; Calvert, Jane
Synthetic biology is an increasingly high-profile area of research that can be understood as encompassing three broad approaches towards the synthesis of living systems: DNA-based device construction, genome-driven cell engineering and protocell creation. Each approach is characterized by different aims, methods and constructs, in addition to a range of positions on intellectual property and regulatory regimes. We identify subtle but important differences between the schools in relation to their treatments of genetic determinism, cellular context and complexity. These distinctions tie into two broader issues that define synthetic biology: the relationships between biology and engineering, and between synthesis and analysis. These themes also illuminate synthetic biology's connections to genetic and other forms of biological engineering, as well as to systems biology. We suggest that all these knowledge-making distinctions in synthetic biology raise fundamental questions about the nature of biological investigation and its relationship to the construction of biological components and systems.
Somekh, Judith; Choder, Mordechai; Dori, Dov
We propose a Conceptual Model-based Systems Biology framework for qualitative modeling, executing, and eliciting knowledge gaps in molecular biology systems. The framework is an adaptation of Object-Process Methodology (OPM), a graphical and textual executable modeling language. OPM enables concurrent representation of the system's structure-the objects that comprise the system, and behavior-how processes transform objects over time. Applying a top-down approach of recursively zooming into processes, we model a case in point-the mRNA transcription cycle. Starting with this high level cell function, we model increasingly detailed processes along with participating objects. Our modeling approach is capable of modeling molecular processes such as complex formation, localization and trafficking, molecular binding, enzymatic stimulation, and environmental intervention. At the lowest level, similar to the Gene Ontology, all biological processes boil down to three basic molecular functions: catalysis, binding/dissociation, and transporting. During modeling and execution of the mRNA transcription model, we discovered knowledge gaps, which we present and classify into various types. We also show how model execution enhances a coherent model construction. Identification and pinpointing knowledge gaps is an important feature of the framework, as it suggests where research should focus and whether conjectures about uncertain mechanisms fit into the already verified model.
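The core OPM notion that processes transform objects between states, and that a failing precondition surfaces a knowledge gap, can be sketched as follows. The process and state names are simplified stand-ins, not the paper's actual mRNA transcription model.

```python
# Sketch of the OPM idea "processes transform objects": each process
# declares the object states it requires and the states it yields;
# executing the model is applying processes whose preconditions hold.

PROCESSES = {
    "initiation":  ({"polymerase": "free"},  {"polymerase": "bound"}),
    "elongation":  ({"polymerase": "bound"}, {"mRNA": "nascent"}),
    "termination": ({"mRNA": "nascent"},     {"mRNA": "released",
                                              "polymerase": "free"}),
}

def execute(state, order):
    for name in order:
        pre, post = PROCESSES[name]
        # A knowledge gap shows up as a process whose precondition fails.
        missing = {k: v for k, v in pre.items() if state.get(k) != v}
        if missing:
            raise ValueError(f"knowledge gap before {name}: need {missing}")
        state.update(post)
    return state

final = execute({"polymerase": "free"},
                ["initiation", "elongation", "termination"])
assert final["mRNA"] == "released"
```

Reordering the process list, or omitting a step, makes `execute` fail at the precondition check, which mirrors how model execution pinpoints where the described mechanism is incomplete.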
In the present, we can observe that a new economy is arising. It is an economy based on knowledge and ideas, in which the key factor for prosperity and for creation of the new jobs is the knowledge capitalization. Knowledge capitalization, intellectual capital, obtaining prosperity in the market economy imposes a new terminology, new managerial methods and techniques, new technologies and also new strategies. In other words, knowledge based economy, as a new type of economy; impose a new type...
Ito, Takahiro; Anzai, Daisuke; Wang, Jianqing
This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method that requires no prior knowledge of biological human tissues. TOA-based localization generally achieves much higher accuracy than other radio-frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissue, so the propagation velocity inside the body differs from that in free space. Because the variation in propagation velocity is mainly determined by the relative permittivity of the body tissues, instead of measuring the relative permittivity in advance, we simultaneously estimate both the WCE location and the relative permittivity. For this purpose, this paper first derives a relative permittivity estimation model based on measured RSSI information. Then, we apply a particle filter algorithm that combines TOA-based localization with RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter achieves an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of human body tissues.
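A minimal sketch of the joint estimation idea, a particle filter over hypotheses of both distance and permittivity, is given below. The TOA and RSSI measurement models and all constants are invented for illustration; they are not the paper's in-body propagation models.

```python
import math, random

C = 3e8  # free-space speed of light, m/s

# Invented measurement models: TOA grows with sqrt(eps) (slower
# propagation in tissue), RSSI falls with distance and permittivity.
def toa(d, eps):
    return d * math.sqrt(eps) / C

def rssi(d, eps):
    return -20 * math.log10(d) - 2.0 * math.sqrt(eps) * d

def particle_filter(z_toa, z_rssi, n=5000, rng=None):
    rng = rng or random.Random(0)
    # Particles: joint hypotheses over (distance, permittivity).
    parts = [(rng.uniform(0.01, 0.5), rng.uniform(1, 80))
             for _ in range(n)]
    # Gaussian likelihoods of both measurements (toy noise scales).
    weights = [math.exp(-((toa(d, e) - z_toa) / 1e-10) ** 2
                        - (rssi(d, e) - z_rssi) ** 2)
               for d, e in parts]
    total = sum(weights)
    # Weighted posterior mean as the joint estimate.
    d_hat = sum(w * d for w, (d, e) in zip(weights, parts)) / total
    e_hat = sum(w * e for w, (d, e) in zip(weights, parts)) / total
    return d_hat, e_hat

true_d, true_eps = 0.12, 50.0
d_hat, e_hat = particle_filter(toa(true_d, true_eps),
                               rssi(true_d, true_eps))
print(f"d ~ {d_hat:.3f} m, eps ~ {e_hat:.1f}")
```

The point the sketch makes is the one in the abstract: a single TOA measurement cannot separate distance from permittivity (only their product enters the delay), but adding the RSSI likelihood lets the filter concentrate on a joint estimate.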
Ju Han Kim
Most methods for large-scale gene expression microarray and RNA-Seq data analysis are designed to determine lists of genes or gene products that show distinct patterns and/or significant differences. The most challenging and rate-limiting step, however, is to determine what the resulting lists of genes and/or transcripts biologically mean. Biomedical ontology- and pathway-based functional enrichment analysis is widely used to interpret the functional role of tightly correlated or differentially expressed genes. Groups of genes are assigned to their associated biological annotations using Gene Ontology terms or biological pathways and then tested to determine whether they are significantly enriched for the corresponding annotations. Unlike previous approaches, Gene Set Enrichment Analysis takes quite the reverse approach by using pre-defined gene sets. Differential co-expression analysis determines the degree of co-expression difference of paired gene sets across different conditions. Outcomes from DNA microarray and RNA-Seq data can be transformed into graphical structures that represent biological semantics. A number of biomedical annotations and external repositories, including clinical resources, can be systematically integrated by biological semantics within the framework of concept lattice analysis. This array of methods for biological knowledge assembly and interpretation has been developed during the past decade and has clearly improved our biological understanding of large-scale genomic data from high-throughput technologies.
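The over-representation test at the heart of such enrichment analysis is typically a hypergeometric upper tail: given N annotated genes of which K carry a term, how likely is it that an n-gene hit list contains at least k of them by chance? A self-contained sketch (the gene counts below are made up):

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance that a
    random n-gene list contains at least k of the K annotated genes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20,000 genes, 100 carrying the GO term, and a 50-gene hit list
# containing 8 of them: far more than the ~0.25 expected by chance.
p = enrichment_p(20000, 100, 50, 8)
print(f"p = {p:.2e}")
```

In practice one would also correct this p-value for the many terms tested (e.g. Benjamini-Hochberg), which the sketch omits.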
U.S. Environmental Protection Agency — The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial...
Chen, Huajun; Chen, Xi; Gu, Peiqin; Wu, Zhaohui; Yu, Tong
Huge amounts of data have recently been generated in the domain of biology. Embedded with domain knowledge from different disciplines, these isolated biological resources are implicitly connected, and together they form a big network of versatile biological knowledge. Faced with such massive, disparate, and interlinked biological data, providing an efficient way to model, integrate, and analyze the big biological network becomes a challenge. In this paper, we present a general OWL (Web Ontology Language) reasoning framework to study the implicit relationships among biological entities. A comprehensive biological ontology spanning traditional Chinese medicine (TCM) and Western medicine (WM) is used to create a conceptual model for the biological network. The corresponding biological data is then integrated into a biological knowledge network serving as the data model. Based on the conceptual model and the data model, a scalable OWL reasoning method is used to infer potential associations between biological entities in the network. In our experiment, we focus on association discovery between TCM and WM. The derived associations are quite useful for biologists seeking to develop novel drugs and to modernize TCM. The experimental results show that the system achieves high efficiency, accuracy, scalability, and effectiveness.
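The kind of inference such a reasoner performs can be illustrated, in a very reduced form, as rule-based closure over a triple store: chains of relations entail new relations, as OWL property chains do. The entities, relations, and rules below are invented and do not represent real TCM/WM data or full OWL semantics.

```python
# Toy facts: an herb contains a compound that targets a protein
# involved in a disease. (All names invented for illustration.)
facts = {
    ("HerbA", "contains", "CompoundX"),
    ("CompoundX", "targets", "ProteinY"),
    ("ProteinY", "involved_in", "DiseaseZ"),
}

RULES = [
    # (relation chain) -> derived relation, akin to OWL property chains
    (("contains", "targets"), "targets"),
    (("targets", "involved_in"), "associated_with"),
]

def infer(facts):
    """Apply the chain rules to a fixed point and return the closure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, r2), r3 in RULES:
            for a, p, b in list(derived):
                if p != r1:
                    continue
                for b2, q, c in list(derived):
                    if b2 == b and q == r2 and (a, r3, c) not in derived:
                        derived.add((a, r3, c))
                        changed = True
    return derived

closure = infer(facts)
# The herb-disease association is not stated anywhere; it is inferred.
assert ("HerbA", "associated_with", "DiseaseZ") in closure
```

A production OWL reasoner works over a description-logic ontology rather than ad hoc chain rules, but the derived herb-disease link is exactly the kind of implicit TCM-WM association the abstract describes.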
Madalina Cristina Tocan
The importance of the knowledge-based economy (KBE) in the twenty-first century is evident. This article analyzes the reflection of knowledge in the economy. The main focus is the analysis of the characteristics of knowledge expression in the economy and the construction of a structure for KBE expression, which allows the mechanism by which the knowledge economy functions to be understood. The authors highlight the possibility of assessing the penetration level of the KBE, which manifests itself through the existence of products of knowledge expression created in their acquisition, creation, usage, and development. The latter phenomenon is interpreted through knowledge expression characteristics: economic and social context, human resources, ICT, innovative business, and innovation policy. The motivation for this analysis is that, despite the existence of the knowledge economy in all developed countries, a definitive, universal list of indicators for mapping and measuring the KBE does not yet exist. Knowledge expression assessment models are presented in the article.
Background: Identification of the transcription factors (TFs) involved in a biological process is the first step towards a better understanding of the underlying regulatory mechanisms. However, due to the involvement of a large number of genes and complicated interactions in a gene regulatory network (GRN), identifying the TFs involved in a biological process remains very challenging. In reality, recognizing the TFs for a given biological process is further complicated by the fact that most eukaryotic genomes encode thousands of TFs, organized in gene families of various sizes and, in many cases, with poor sequence conservation outside small conserved domains. This poses a significant challenge for identifying the exact TFs involved in a process of interest, or for ranking the importance of a set of TFs to it. Therefore, new methods for recognizing novel TFs are desperately needed. Although a plethora of methods have been developed to infer regulatory genes using microarray data, methods that use an existing knowledge base, in particular the validated genes known to be involved in a process, to bait/guide the discovery of novel TFs are still rare. Such methods can replace the sometimes arbitrary process of selecting candidate genes for experimental validation and significantly advance our knowledge and understanding of the regulation of a process. Results: We developed an automated software package called TF-finder for recognizing TFs involved in a biological process using microarray data and an existing knowledge base. TF-finder contains two components for TF recognition: adaptive sparse canonical correlation analysis (ASCCA) and an enrichment test. ASCCA uses positive target genes to bait TFs from gene expression data, while the enrichment test examines the presence of positive TFs in the outcomes from ASCCA. Using microarray data from salt and water stress experiments, we showed TF-finder is very efficient in recognizing
It is increasingly realized that knowledge is the most important resource and that learning is the most important process in the economy. Sometimes this is expressed by characterising the current era as a 'knowledge-based economy'. But this concept might be misleading by indicating that there is one common knowledge base on which economic activities can be built. In this paper we argue that it is more appropriate to see the economy as connecting to different 'pools of knowledge'. The argument is built upon a conceptual framework in which we make distinctions between private/public, local/global, individual/collective and tacit/codified knowledge. The purpose is both 'academic' and practical. Our analysis demonstrates the limits of a narrowly economic perspective on knowledge, and we show that these distinctions have important implications both for innovation policy and for the management of innovation.
Sturm, A. [Hamburgische Electricitaets-Werke AG, Hamburg (Germany)]
The establishment of maintenance strategies is of crucial significance for the reliability of a plant and the economic efficiency of maintenance measures. Knowledge about the condition of components and plants, from both the technical and the business management point of view, therefore becomes one of the fundamental questions and the key to efficient management and maintenance. A new way to determine the maintenance strategy can be called Knowledge Based Maintenance. A simple method is shown for determining strategies while taking the technical condition of the components of the production process into account to the greatest possible degree. Software implementing an algorithm for Knowledge Based Maintenance guides the user through the complex task of determining maintenance strategies for these complex plant components. (orig.)
Walters, Kristi L.
The importance of student motivation and its connection to other learning variables (i.e., attitudes, knowledge, persistence, attendance) is well established. Collaborative work at the undergraduate level has been recognized as a valuable tool in large courses. However, motivation and collaborative group work have rarely been combined. This project utilized student motivation to learn biology to place non-major biology undergraduates in collaborative learning groups at East Carolina University, a mid-sized southeastern American university, to determine the effects of this construct on student learning. A pre-test measuring motivation to learn biology, attitudes toward biology, perceptions of biology and biologists, views of science, and content knowledge was administered. A similar post-test followed as part of the final exam. Two sections of the same introductory biology course (n = 312) were used and students were divided into homogeneous and heterogeneous groups (based on their motivation score). The heterogeneous groups (n = 32) consisted of a mixture of different motivation levels, while the homogeneous groups (n = 32) were organized into teams with similar motivation scores using tiers of high-, middle-, and low-level participants. Data analysis determined mixed perceptions of biology and biologists. These include the perceptions that biology was less intriguing, less relevant, less practical, less ethical, and less understandable. Biologists were perceived as being neat and slightly intelligent, but not very altruistic, humane, ethical, logical, honest, or moral. Content knowledge scores more than doubled from pre- to post-test. Half of the items measuring views of science were not statistically significantly different from pre- to post-test. Many of the factors for attitudes toward biology became more agreeable from pre- to post-test. Correlations between motivation scores, participation levels, attendance rates, and final course grades were examined at both the
The main assumptions and functions of the proposed Foundry Knowledge Base (FKB) are presented in this paper. FKB is a framework for information exchange on casting products and manufacturing methods. We use a CMS (Content Management System) to develop and maintain our web-based system. CastML, an XML dialect developed by the authors for the description of casting products and processes, is used as a tool for information interchange between our system and outside systems, while SQL is used to store and edit knowledge rules and also to solve the basic selection problems in the rule-based module. Besides the standard functions (company data, news, events, forums, and a media kit), our website contains a number of non-standard functions; the intelligent search module based on an expert system is the main advantage of our solution. FKB is intended to be a social portal whose content will be developed by the foundry community.
Haan, Hendrik Wietze de; Hesselink, Wim H.; Renardel de Lavalette, Gerard R.
A knowledge-based program is a high-level description of the behaviour of agents in terms of knowledge that an agent must have before (s)he may perform an action. The definition of the semantics of knowledge-based programs is problematic, since it involves a vicious circle; the knowledge of an agent
In this paper I propose the knowledge base as a fruitful way to apprehend journalism. With the claim that the majority of practice is anchored in knowledge, understood as nine categories of rationales, forms and levels, this knowledge base appears as a contextual look at journalists' knowledge…, and place. As an analytical framework, the knowledge base is limited to understanding the practice of newspaper journalists, but, conversely, the knowledge base encompasses more general beginnings through the inclusion of overall structural relationships in the media and journalism and general theories on practice and knowledge. As the result of an abductive reasoning is a theory proposal, there is a need for more deductive approaches to test the validity of this knowledge base claim. It is thus relevant to investigate which rationales are included in the knowledge base of journalism, as the dimension does…
Wadouh, Julia; Liu, Ning; Sandmann, Angela; Neuhaus, Birgit J.
Knowledge structure is an important aspect for defining students' competency in biology learning, but how knowledge structure is influenced by the teaching process in naturalistic biology classroom settings has scarcely been empirically investigated. In this study, 49 biology lessons in the teaching unit "blood and circulatory system" in…
Purves, R. B.; Carnes, James R.; Cutts, Dannie E.
One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.
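The round trip described above, lifting relational rows into knowledge-base assertions and populating derived facts back into the database, can be sketched with an in-memory SQLite table. The table, column, and part names are invented for illustration; this is not Foundation's actual schema.

```python
import sqlite3

# A toy relational table of part/parent relationships.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id TEXT, parent TEXT)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("valve", "pump"), ("pump", "cooling_loop")])

# Relational -> knowledge-based representation: rows become
# (subject, relation, object) assertions.
kb = [(pid, "part_of", parent)
      for pid, parent in conn.execute("SELECT id, parent FROM parts")]

# The knowledge-based front-end derives a new fact (one-step
# transitive part_of) and populates it back into the database.
derived = [(a, c) for a, _, b in kb for b2, _, c in kb if b == b2]
conn.executemany("INSERT INTO parts VALUES (?, ?)", derived)

rows = set(conn.execute("SELECT id, parent FROM parts"))
assert ("valve", "cooling_loop") in rows
```

The two directions of the sketch correspond to the two benefits named in the abstract: inspection of existing data through the knowledge representation, and population of the database from inferred knowledge.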
Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.
In this paper, we study the problem of exchanging knowledge between a source and a target knowledge base (KB), connected through mappings. Differently from the traditional database exchange setting, which considers only the exchange of data, we are interested in exchanging implicit knowledge. As rep
Barrera, Junior; Cesar, Roberto M; Ferreira, João E; Gubitoso, Marco D
This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information, organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results stored in a complex object database for knowledge discovery. The kernel also includes a specific high-performance library that allows the mining functions to be designed and applied on parallel machines. Experimental results obtained by applying the kernel functions are reported.
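One way to picture such a generic kernel is as a registry of pluggable mining functions applied uniformly to extracted records, with results retained for later discovery steps. This is a hypothetical interface, not the paper's actual implementation; the toy clustering function stands in for the kernel's real supervised and unsupervised methods.

```python
class MiningKernel:
    """Generic kernel: mining functions are registered once and run
    against records extracted from a primary database; results are
    kept in a store standing in for the complex object database."""

    def __init__(self):
        self.functions = {}
        self.results = {}

    def register(self, name, fn):
        self.functions[name] = fn

    def run(self, name, records):
        self.results[name] = self.functions[name](records)
        return self.results[name]

# A toy unsupervised miner: split records on a threshold.
def threshold_cluster(records, cut=10):
    return {"low":  [r for r in records if r < cut],
            "high": [r for r in records if r >= cut]}

kernel = MiningKernel()
kernel.register("cluster", threshold_cluster)
out = kernel.run("cluster", [3, 8, 15, 22])
assert out == {"low": [3, 8], "high": [15, 22]}
```

A supervised classifier would plug in through exactly the same `register`/`run` interface, which is what makes the kernel "generic" in the abstract's sense.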
Morante, Silvia; Rossi, Giancarlo
The purpose of this work is to reconsider and critically discuss the conceptual foundations of modern biology and the bio-sciences in general, and to provide an epistemological guideline to help frame the teaching of these disciplines and enhance the quality of their presentation in high school, master's, and Ph.D. courses. After discussing the methodological problems that arise in trying to construct a sensible and useful scientific approach applicable to the study of living systems, we illustrate the general requirements that a workable scheme of investigation should meet to comply with the principles of the Galilean method. The amazing success of basic physics, the Galilean science par excellence, can be traced back to the development of a radically "reductionistic" approach to the interpretation of experiments and a systematic procedure tailored to the paradigm of "falsifiability", aimed at consistently incorporating new information into extended models/theories. The development of the bio-sciences seems to fit with neither reductionism (the deeper the level of description of a biological phenomenon, the harder it is to find general and simple laws) nor falsifiability (experiments do not always provide a yes-or-no answer). Should we conclude that biology is not a science in the Galilean sense? We want to show that this is not so. Rather, in the study of living systems a novel interpretative paradigm, that of "complexity", has been developed which, without ever conflicting with the basic principles of physics, allows organizing ideas, conceiving new models, and understanding the puzzling lack of reproducibility that seems to affect experiments in biology and in other modern areas of investigation. In the delicate task of conveying scientific concepts and principles to students, as well as in popularising the bio-sciences to a wider audience, it is of the utmost importance for the success of the process of learning to highlight the internal logical consistency of
Ardan, Andam S.
The purposes of this study were (1) to describe the biology learning such as lesson plans, teaching materials, media and worksheets for the tenth grade of High School on the topic of Biodiversity and Basic Classification, Ecosystems and Environment Issues based on local wisdom of Timorese; (2) to analyze the improvement of the environmental…
This paper tries to present the relation between the knowledge society and the knowledge-based economy. We will identify the main pillars of the knowledge society and present their importance for the development of knowledge societies. Further, we will present two perspectives on knowledge societies, namely the science and learning perspectives, that directly affect knowledge-based economies. At the end, we will conclude by identifying some important questions that must be answered regarding this new social paradigm.
Working Paper No. 256 is published as "The Knowledge Based Information Economy" (authors: Gunnar Eliasson, Stefan Fölster, Thomas Lindberg, Tomas Pousette and Erol Taymaz). Stockholm: Industrial Institute for Economic and Social Research and Telecon, 1990.
Roy, Claudette; Hay, D. Robert
Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from computer assistance. A knowledge-based engineering approach was developed to address these problems. A number of problems were addressed during system design to make the system practical, extending beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed, including system functions, structure, interfaces, the health care environment, and terminology and taxonomy. An integrated system concept, from assessment through intervention and evaluation, is outlined.
Greene, Casey S; Troyanskaya, Olga G
Integrative systems biology is an approach that brings together diverse high-throughput experiments and databases to gain new insights into biological processes or systems at molecular through physiological levels. These approaches rely on diverse high-throughput experimental techniques that generate heterogeneous data by assaying varying aspects of complex biological processes. Computational approaches are necessary to provide an integrative view of these experimental results and enable data-driven knowledge discovery. Hypotheses generated from these approaches can direct definitive molecular experiments in a cost-effective manner. By using integrative systems biology approaches, we can leverage existing biological knowledge and large-scale data to improve our understanding of as yet unknown components of a system of interest and how its malfunction leads to disease.
Ameny, Gloria Millie Apio
Adequate understanding of the nature of science is a major goal of science education. Understanding of the evolutionary nature of biological knowledge is a means of reinforcing biology students' understanding of the nature of science. It provides students with the philosophical basis, explanatory ideals, and subject matter-specific views of what counts as a scientifically acceptable biological explanation. This study examined 121 college introductory biology and advanced zoology students for their conceptions related to the nature of biological knowledge. A 60-item Likert-scale questionnaire called the Nature of Biological Knowledge Scale and student interviews were used as complementary research instruments. Firstly, the study showed that 80–100% of college biology students have an adequate understanding of scientific methods, and that a similar percentage of students had learned the theory of evolution by natural selection in their biology courses. Secondly, the study showed that at least 60–80% of the students do not understand the importance of evolution in biological knowledge. Yet the study revealed that a statistically significant positive correlation exists among students' understanding of natural selection and divergent and convergent evolutionary models. Thirdly, the study showed that about 20–58% of college students hold prescientific conceptions which, in part, are responsible for students' lack of understanding of the nature of biological knowledge. A statistically significant negative correlation was found between students' prescientific conceptions about the basis of biological knowledge and the nature of change in biological processes, and their understanding of natural selection and evolutionary models. However, the study showed that student characteristics such as gender, age, major, or years in college have no statistically significant influence on students' conceptions related to the nature of biological knowledge. Only students' depth of biological
Coulet, Adrien; Smaïl-Tabbone, Malika; Napoli, Amedeo; Devignes, Marie-Dominique
One current challenge in biomedicine is to analyze large amounts of complex biological data for extracting domain knowledge. This work concerns the use of knowledge-based techniques such as knowledge discovery (KD) and knowledge representation (KR) in pharmacogenomics, where knowledge units represent genotype-phenotype relationships in the context of a given treatment. One objective is to design a knowledge base (KB, here also referred to as an ontology) and then to use it in the KD process itself. A method is proposed for dealing with two main tasks: (1) building a KB from heterogeneous data related to genotype, phenotype, and treatment, and (2) applying KD techniques to knowledge assertions for extracting genotype-phenotype relationships. An application was carried out on a clinical trial concerned with the variability of drug response to montelukast treatment. Genotype-genotype and genotype-phenotype associations were retrieved together with new associations, allowing the extension of the initial KB. This experiment shows the potential of KR and KD processes, especially for designing KBs, checking KB consistency, and reasoning for problem solving.
Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi
Design knowledge was reused for innovative design work to support designers with product design knowledge and help designers who lack rich experiences to improve their design capacity and efficiency. First, based on the ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…
WANG Hui-jin; HU Hua; LI Qing
Search engines have greatly helped us to find the desired information on the Internet. Most search engines use keyword-matching techniques. This paper discusses a Dynamic Knowledge Base based Search Engine (DKBSE), which can expand the user's query using the keywords' concepts or meanings. To do this, the DKBSE constructs and maintains its knowledge base dynamically via the system's search results and the user's feedback. The DKBSE expands the user's initial query using the knowledge base, and returns the information retrieved for the expanded query.
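The abstract above describes query expansion driven by a dynamically maintained knowledge base. A minimal sketch of that idea, with hypothetical class and method names (the DKBSE paper's actual data structures are not given in the abstract):

```python
# Sketch of knowledge-base-driven query expansion (names are illustrative).

class DynamicKnowledgeBase:
    def __init__(self):
        # Maps a keyword to the set of related concept terms learned so far.
        self.concepts = {}

    def learn(self, keyword, related_terms):
        """Update the knowledge base from search results or user feedback."""
        self.concepts.setdefault(keyword, set()).update(related_terms)

    def expand(self, query):
        """Expand each query keyword with its known concept terms."""
        expanded = set(query.split())
        for word in query.split():
            expanded |= self.concepts.get(word, set())
        return expanded

kb = DynamicKnowledgeBase()
kb.learn("car", {"automobile", "vehicle"})  # e.g. learned from user feedback
print(sorted(kb.expand("car rental")))  # ['automobile', 'car', 'rental', 'vehicle']
```

The expanded term set would then be submitted to the underlying keyword-matching engine in place of the original query.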
Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge-based simulation. This approach turns a descriptive simulation tool into a prescriptive tool, one which recommends specific goals. Much work in the area of general goal processing and explanation of recommendations remains to be done.
Demaid, Adrian; Edwards, Lyndon
Discusses the nature and current state of knowledge-based systems and expert systems. Describes an expert system from the viewpoints of a computer programmer and an applications expert. Addresses concerns related to materials selection and forecasts future developments in the teaching of materials engineering. (ML)
Sebastian Ion CEPTUREANU
Full Text Available In the new economy, knowledge is an essential component of economic and social systems. The organizational focus has to be on building knowledge-based management, development of human resources and building intellectual capital capabilities. Knowledge-based management is defined, at company level, by economic processes that emphasize creation, selling, buying, learning, storing, developing, sharing and protection of knowledge as a decisive condition for profit and long-term sustainability of the company. Hence, knowledge is, concurrently, according to a majority of specialists, raw material, capital, product and an essential input. Knowledge-based communities are one of the main constituent elements of a framework for knowledge-based management. These are peer networks consisting of practitioners within an organization, supporting each other to perform better through the exchange and sharing of knowledge. Some large companies have contributed to or supported the establishment of numerous communities of practice, some of which may have several thousand members. They operate in different ways, are of different sizes, have different areas of interest and address knowledge at different levels of its maturity. This article examines the role of knowledge-based communities from the perspective of knowledge-based management, given that the arrangements for organizational learning, creating, sharing, and use of knowledge within organizations become more heterogeneous and take forms more difficult to predict by managers and specialists.
Bruffaerts, R.; Weer, A.S. De; Grauwe, S.M.T. De; Thys, M.; Dries, E.; Thijs, V.; Sunaert, S.; Vandenbulcke, M.; Deyne, S. De; Storms, G.; Vandenberghe, R.
We investigated the critical contribution of right ventral occipitotemporal cortex to knowledge of visual and functional-associative attributes of biological and non-biological entities and how this relates to category-specificity during confrontation naming. In a consecutive series of 7 patients wi
Ziegler, Brittany; Montplaisir, Lisa
Students who lack metacognitive skills can struggle with the learning process. To be effective learners, students should recognize what they know and what they do not know. This study examines the relationship between students' perception of their knowledge and determined knowledge in an upper-level biology course utilizing a pre/posttest…
TANG Zhi-jie; YANG Bao-an; ZHANG Ke-jing
Based on knowledge representation and knowledge reasoning, this paper addresses the creation of a multi-attribute knowledge base on the basis of hybrid knowledge representation, with the help of an object-oriented programming language and a relational database. Compared with a general knowledge base, a multi-attribute knowledge base can enhance the ability of knowledge processing and application; integrate heterogeneous knowledge, such as model, symbol, and case-based sample knowledge; and support the whole decision process by integrated reasoning.
Morowitz, H.J.; Smith, T.
Current understanding of biology involves complex relationships rooted in enormous amounts of data. These data include entries from biochemistry, ecology, genetics, human and veterinary medicine, molecular structure studies, agriculture, embryology, systematics, and many other disciplines. The present wealth of biological data goes beyond past accumulations and now includes new understandings from molecular biology. Several important biological databases are currently being supported, and more are planned; however, major problems of interdatabase communication and management efficiency abound. Few scientists are currently capable of keeping up with this ever-increasing wealth of knowledge, let alone searching it efficiently for new or unsuspected links and important analogies. Yet this is what is required if the continued rapid generation of such data is to lead most effectively to the major conceptual, medical, and agricultural advances anticipated over the coming decades in the United States. The opportunity exists to combine the potential of modern computer science, database management, and artificial intelligence in a major effort to organize the vast wealth of biological and clinical data. The time is right because the amount of data is still manageable even in its current highly fragmented form; important hardware and computer science tools have been greatly improved; and there have been recent fundamental advances in our comprehension of biology. The latter is particularly true at the molecular level, where the information for nearly all higher structure and function is encoded. The organization of all biological experimental data coordinately within a structure incorporating our current understanding - the Matrix of Biological Knowledge - will provide the data and structure for the major advances foreseen in the years ahead.
Mthethwa-Kunene, Eunice; Oke Onwu, Gilbert; de Villiers, Rian
This study explored the pedagogical content knowledge (PCK) and its development of four experienced biology teachers in the context of teaching school genetics. PCK was defined in terms of teacher content knowledge, pedagogical knowledge and knowledge of students' preconceptions and learning difficulties. Data sources of teacher knowledge base included teacher-constructed concept maps, pre- and post-lesson teacher interviews, video-recorded genetics lessons, post-lesson teacher questionnaire and document analysis of teacher's reflective journals and students' work samples. The results showed that the teachers' individual PCK profiles consisted predominantly of declarative and procedural content knowledge in teaching basic genetics concepts. Conditional knowledge, which is a type of meta-knowledge for blending together declarative and procedural knowledge, was also demonstrated by some teachers. Furthermore, the teachers used topic-specific instructional strategies such as context-based teaching, illustrations, peer teaching, and analogies in diverse forms but failed to use physical models and individual or group student experimental activities to assist students' internalization of the concepts. The finding that all four teachers lacked knowledge of students' genetics-related preconceptions was equally significant. Formal university education, school context, journal reflection and professional development programmes were considered as contributing to the teachers' continuing PCK development. Implications of the findings for biology teacher education are briefly discussed.
The aim of this paper is to emphasize the importance of the knowledge-based economy; in these times of fast and sometimes radical change, it is impossible for either people or organizations to endure without adapting. The body of the paper develops the knowledge-based economy concept: its elements, definitions of the knowledge-based economy, its stages, and the main forms of knowledge codification. At the end of the paper, the author presents the importance of knowledge economy in the Romanian ...
Tikidjian, Raffi; James, Mark; Mackey, Ryan
The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.
In the context of contemporary economy logistics changes refer to the emergence of new inter-organizational logistics structures such as logistics networks, and to a number of alterations in organizations’ vision and conduct regarding the role and importance of knowledge. The research focuses on identifying the instruments promoting inter-organizational knowledge transfer within logistics networks provided that each organization relies on its own knowledge and organizational skills to ensure ...
As a system of knowledge, nursing has utilized a range of subjects and reconstituted them to reflect the thinking and practice of health care. Often drawn to a holistic model, nursing finds it difficult to resist the reductionist tendencies in biological and medical thinking. In this paper I will propose a relational approach to knowledge that is able to address this issue. The paper argues that biology is not characterized by one stable theory but is often a contentious topic and employs philosophically diverse models in its scientific research. Biology need not be seen as a reductionist science, but reductionism is nonetheless an important current within biological thinking. These reductionist currents can undermine nursing knowledge in four main ways. Firstly, that the conclusions drawn from reductionism go far beyond their data based on an approach that prioritizes biological explanations and eliminates others. Secondly, that the methods employed by biologists are sometimes weak, and the limitations are insufficiently acknowledged. Thirdly, that the assumptions that drive the research agenda are problematic, and finally that uncritical application of these ideas can be potentially disastrous for nursing practice. These issues are explored through an examination of the problems reductionism poses for the issue of gender, mental health, and altruism. I then propose an approach based on critical realism that adopts an anti-reductionist philosophy that utilizes the conceptual tools of emergence and a relational ontology.
The aim, characteristics and requirements of stampability evaluation are identified. As stampability evaluation is highly skill-intensive and requires a wide variety of design expertise and knowledge, a knowledge-based system is proposed for implementing it. The knowledge representation and processing phases of stampability evaluation are illustrated. A case study demonstrates the feasibility of the knowledge-based approach to stampability evaluation.
Hadjichambis, Andreas Ch.; Georgiou, Yiannis; Paraskeva-Hadjichambi, Demetra; Kyza, Eleni A.; Mappouras, Demetrios
Despite the importance of understanding how the human reproductive system works, adolescents worldwide exhibit weak conceptual understanding, which leads to serious risks, such as unwanted pregnancies and sexually transmitted diseases. Studies focusing on the development and evaluation of inquiry-based learning interventions, promoting the…
Full Text Available Biological interpretability is a key requirement for the output of microarray data analysis pipelines. The most widely used pipeline first identifies a gene signature from the acquired measurements and then uses gene enrichment analysis as a tool for functionally characterizing the obtained results. Recently, Knowledge Driven Variable Selection (KDVS), an alternative approach which performs both steps at the same time, has been proposed. In this paper, we assess the effectiveness of KDVS against standard approaches on a Parkinson's Disease (PD) dataset. The presented quantitative analysis is made possible by the construction of a reference list of genes and gene groups associated with PD. Our work shows that KDVS is much more effective than the standard approach in enhancing the interpretability of the obtained results.
Graciela da Silva Oliveira
Full Text Available The aim of this study was to identify which topics of biological evolution theory Brazilian students claim to know and their relation to variables such as sex, age, geographical location, socioeconomic aspects, religion and science. 2,404 high school students (55.1% girls) enrolled in 78 Brazilian schools took part in the research. The data were generated through a questionnaire and analyzed using the Statistical Package for the Social Sciences (SPSS), version 18.0. The results point out that knowledge of topics about evolution is low among students and is influenced by the variables tested; the associations identified occurred in diverse ways, with lower or higher intensity according to the context studied.
TAO Xue-hong; SUN Wei; et al.
This paper gives an outline of knowledge base revision and some recently presented complexity results about propositional knowledge base revision. Different methods for revising propositional knowledge bases have been proposed recently by several researchers, but all methods are intractable in the general case. For practical application, this paper presents a revision method for a special case and gives its corresponding polynomial algorithm.
Mthethwa-Kunene, Eunice; Onwu, Gilbert Oke; de Villiers, Rian
This study explored the pedagogical content knowledge (PCK) and its development of four experienced biology teachers in the context of teaching school genetics. PCK was defined in terms of teacher content knowledge, pedagogical knowledge and knowledge of students' preconceptions and learning difficulties. Data sources of teacher knowledge base…
Heupel, Wolfgang-Moritz; Drenckhahn, Detlev
Central to modern Histochemistry and Cell Biology stands the need for visualization of cellular and molecular processes. In the past several years, a variety of techniques has been achieved bridging traditional light microscopy, fluorescence microscopy and electron microscopy with powerful software-based post-processing and computer modeling. Researchers now have various tools available to investigate problems of interest from bird's- up to worm's-eye of view, focusing on tissues, cells, proteins or finally single molecules. Applications of new approaches in combination with well-established traditional techniques of mRNA, DNA or protein analysis have led to enlightening and prudent studies which have paved the way toward a better understanding of not only physiological but also pathological processes in the field of cell biology. This review is intended to summarize articles standing for the progress made in "histo-biochemical" techniques and their manifold applications.
Knight, K; Haines, M G; Hatzivassiloglou, V; Hovy, E; Iida, M; Luk, S K; Okumura, A; Whitney, R; Yamada, K; Knight, Kevin; Chander, Ishwar; Haines, Matthew; Hatzivassiloglou, Vasileios; Hovy, Eduard; Iida, Masayo; Luk, Steve K.; Okumura, Akitoshi; Whitney, Richard; Yamada, Kenji
We summarize recent machine translation (MT) research at the Information Sciences Institute of USC, and we describe its application to the development of a Japanese-English newspaper MT system. Our work aims at scaling up grammar-based, knowledge-based MT techniques. This scale-up involves the use of statistical methods, both in acquiring effective knowledge resources and in making reasonable linguistic choices in the face of knowledge gaps.
Konsynski, Benn R.; And Others
A series of articles addresses issues concerning decision support and knowledge based systems. Topics covered include knowledge-based systems for information centers; object oriented systems; strategic information systems case studies; user perception; manipulation of certainty factors by individuals and expert systems; spreadsheet program use;…
The maintenance sequences of a knowledge base and their limits are introduced. Some concepts used in knowledge base maintenance, such as new laws, user's rejections, and reconstructions of a knowledge base, are defined; the related theorems are proved. A procedure is defined using transition systems; it generates maintenance sequences for a given user's model and a knowledge base. It is proved that all sequences produced by the procedure are convergent, and their limit is the set of true sentences of the model. Some computational aspects of reconstructions are studied. An R-calculus is given to deduce a reconstruction when a knowledge base meets a user's rejection. The work is compared with AGM's theory of belief revision.
Chen, Yifei; Guo, Hongjian; Liu, Feng; Manderick, Bernard
Interaction Article Classification (IAC) is a specific text classification application in the biological domain that tries to find out which articles describe Protein-Protein Interactions (PPIs), to help extract PPIs from the biological literature more efficiently. However, the existing text representation and feature weighting schemes commonly used for text classification are not well suited to IAC. We capture and utilise biological domain knowledge, i.e. gene mentions (also known as protein or gene names) in the articles, to address the problem. We put forward a new gene mention order-based approach that highlights the important role of gene mentions in representing the texts. Furthermore, we also incorporate the information concerning gene mentions into a novel feature weighting scheme called Gene Mention-based Term Frequency (GMTF). By conducting experiments, we show that using the proposed representation and weighting schemes, our Interaction Article Classifier (IACer) performs better than other leading systems at present.
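The core idea of gene mention-based weighting can be sketched as up-weighting term frequencies for tokens that are recognized gene mentions. This is only an illustration of the concept; the published GMTF formula and the `boost` parameter are not specified in the abstract and may differ:

```python
from collections import Counter

def gmtf_weights(tokens, gene_mentions, boost=2.0):
    """Term-frequency weights in which gene-mention tokens are up-weighted.

    Sketch of the intuition behind Gene Mention-based Term Frequency (GMTF):
    terms that name genes/proteins carry more signal for deciding whether an
    article describes a protein-protein interaction.
    """
    tf = Counter(tokens)
    return {t: c * (boost if t in gene_mentions else 1.0) for t, c in tf.items()}

tokens = "p53 binds mdm2 and mdm2 inhibits p53".split()
weights = gmtf_weights(tokens, gene_mentions={"p53", "mdm2"})
# gene mentions get weight count * 2.0; ordinary words keep plain counts
```

In a full classifier, these weights would replace raw term frequencies in the document vectors fed to the learning algorithm.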
Full Text Available Knowledge Based Processes (KBP) have been introduced to facilitate knowledge transfer among organizational and corporate employees. They stress the key role of socialization and group meetings in promoting effective knowledge transfer. Meetings within virtual environments are becoming more and more common in today's organizational settings, and many conferencing tools are used to facilitate such meetings. However, providing participants with a conferencing or chatting tool and expecting them to transfer their knowledge to each other in a convenient way can lead to many disappointments. CSCL, Computer Support for Collaborative Learning, is a relatively new discipline within the teaching and learning field. Applying CSCL techniques and technologies in Knowledge Base Systems (KBS) is a reasonable option, since teaching and learning is essentially a process of knowledge transfer between instructors and students, or collaboratively between students themselves. In this research we focus on the usage of Collaboration Scripts (CS) as a way to support knowledge transfer sessions in a structured and formal way. This facilitates sharing tacit knowledge via guided interpersonal interactions and turning it into explicit knowledge by capturing and retrieving these interactions. In this paper we present the scripting structure of three common collaboration techniques used in Knowledge Based Processes. As a proof of concept, two of these techniques are described using the collaboration scripting language, ColScript, which we introduced in earlier research.
Manning, J; Broughton, V; McConnell, E A
The challenge in nursing education is to create a learning environment that enables students to learn new knowledge, access previously acquired information from a variety of disciplines, and apply this newly constructed knowledge to the complex and constantly changing world of practice. Faculty at the University of South Australia, School of Nursing, City Campus describe the use of reality based scenarios to acquire domain-specific knowledge and develop well connected associative knowledge networks, both of which facilitate theory based practice and the student's transition to the role of registered nurse.
Full Text Available Abstract Background Bayesian Network (BN) modeling is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included cocitation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, and samples are drawn from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge, BN modeling is not always better than a random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusion Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
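The edge-reservoir construction described above can be sketched directly: each candidate edge appears with a copy number proportional to its prior likelihood, so edges with stronger prior support are proposed more often during MCMC. The function names and the discretization scale below are illustrative, not the paper's actual implementation:

```python
import random

def build_edge_reservoir(edge_likelihoods, scale=10):
    """Build a candidate-edge reservoir in which each edge's copy number is
    proportional to its estimated prior likelihood of functional linkage.

    Sketch of the idea only: `scale` controls the discretization of the
    likelihoods into integer copy numbers.
    """
    reservoir = []
    for edge, likelihood in edge_likelihoods.items():
        copies = max(1, round(likelihood * scale))
        reservoir.extend([edge] * copies)
    return reservoir

# Hypothetical prior likelihoods (e.g. from a Naive Bayes combination of
# PubMed cocitation and GO semantic similarity).
likelihoods = {("geneA", "geneB"): 0.9, ("geneA", "geneC"): 0.3}
reservoir = build_edge_reservoir(likelihoods)

random.seed(0)
# An MCMC iteration would draw a candidate edge from the reservoir; edges
# with higher prior likelihood are proposed proportionally more often.
proposal = random.choice(reservoir)
```

Each proposed edge would then be accepted or rejected by the usual MCMC criterion against the expression-data likelihood, so the prior biases the search without overriding the data.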
This book presents innovative and high-quality research on the implementation of conceptual frameworks, strategies, techniques, methodologies, informatics platforms and models for developing advanced knowledge-based systems and their application in different fields, including Agriculture, Education, Automotive, Electrical Industry, Business Services, Food Manufacturing, Energy Services, Medicine and others. Knowledge-based technologies employ artificial intelligence methods to heuristically address problems that cannot be solved by means of formal techniques. These technologies draw on standard and novel approaches from various disciplines within Computer Science, including Knowledge Engineering, Natural Language Processing, Decision Support Systems, Artificial Intelligence, Databases, Software Engineering, etc. As a combination of different fields of Artificial Intelligence, the area of Knowledge-Based Systems applies knowledge representation, case-based reasoning, neural networks, Semantic Web and TICs used...
Full Text Available Nowadays, the world's economies are rapidly moving towards being more knowledge-based economies (KBE), supporting the force of knowledge as a vital component of economic growth. This recent acceleration of the transition to the knowledge-based economy worldwide has affected regional economic performance. In this paper, we survey regional convergence in the knowledge-based economy for selected Asia and Pacific countries. We use a growth model in the Barro and Sala-i-Martin (1995) framework for the period 1995-2009. It includes a panel data set consisting of the annual growth rate of GDP per capita for the selected Asia and Pacific countries and a group of indicators that define the state of the knowledge-based economy in each country. The empirical results indicate that absolute and conditional convergence are not rejected for the selected countries. The investigation of the robustness of the model results confirms the existence of regional convergence for the studied countries.
Capraro, Gerard T.; Wicks, Michael C.
An airborne ground looking radar sensor's performance may be enhanced by selecting algorithms adaptively as the environment changes. A short description of an airborne intelligent radar system (AIRS) is presented with a description of the knowledge based filter and detection portions. A second level of artificial intelligence (AI) processing is presented that monitors, tests, and learns how to improve and control the first level. This approach is based upon metacognition, a way forward for developing knowledge based systems.
Full Text Available Abstract Background Knowledge translation is an interactive process of knowledge exchange between health researchers and knowledge users. Given that the health system is broad in scope, it is important to reflect on how definitions and applications of knowledge translation might differ by setting and focus. Community-based organizations and their practitioners share common characteristics related to their setting, the evidence used in this setting, and anticipated outcomes that are not, in our experience, satisfactorily reflected in current knowledge translation approaches, frameworks, or tools. Discussion Community-based organizations face a distinctive set of challenges and concerns related to engaging in the knowledge translation process, suggesting a unique perspective on knowledge translation in these settings. Specifically, community-based organizations tend to value the process of working in collaboration with multi-sector stakeholders in order to achieve an outcome. A feature of such community-based collaborations is the way in which 'evidence' is conceptualized or defined by these partners, which may in turn influence the degree to which generalizable research evidence in particular is relevant and useful when balanced against more contextually-informed knowledge, such as tacit knowledge. Related to the issues of evidence and context is the desire for local information. For knowledge translation researchers, developing processes to assist community-based organizations to adapt research findings to local circumstances may be the most helpful way to advance decision making in this area. A final characteristic shared by community-based organizations is involvement in advocacy activities, a function that has been virtually ignored in traditional knowledge translation approaches. Summary This commentary is intended to stimulate further discussion in the area of community-based knowledge translation. Knowledge translation, and exchange
Wu Xiaofan; Zhou Liang; Zhang Lei; Li Lingzhi; Ding Qiulin
Based on topic maps, a preprocessing scheme using similarity comparison is presented and applied in knowledge management. A topic- and occurrence-oriented merging algorithm is also introduced to implement knowledge integration for the sub-system. An Omnigator-supported example from an aeronautics institute is used to validate the preprocessing method, and the result indicates it can speed up the research schedule.
A computer-aided design system, which is based on knowledge of draping and frame-producer methods, is put forward. All kinds of knowledge applied here are abstracted from fashion design, where a complicated object is divided into several simple parts according to the parts of the body. A mathematical model of the display for real fashion illustration is then elucidated.
1. EXECUTIVE SUMMARY • It is proposed that a consortium for research on and development of tools for the knowledge-based organization be established at Learning Lab Denmark. • The knowledge-based organization must refine and use the knowledge held by its members and not confuse it with the information held by its computers. Knowledge specialists cannot be managed and directed in the classical sense. The organization needs to be rehumanized and conditions for reflection, learning and autonomy enhanced, so that its collective knowledge may be better used to create real value for its stakeholders. • To help organizations do this, tools need to be researched, sophisticated or invented. Broadly conceived, tools include ideas, such as theories, missions and business plans; practices, such as procedures and behaviors; and instruments, such as questionnaires, indicators, agendas and methods...
Chiang, Li-Chi; Yeh, Mei-Ling; Su, Sui-Lung
U.S. President Obama announced a new era of precision medicine in the Precision Medicine Initiative (PMI). This initiative aims to accelerate the progress of personalized medicine in light of individual requirements for prevention and treatment in order to improve the state of individual and public health. The recent and dramatic development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as genomics, microbiome, diverse biomarkers, and even pharmacogenomics), and computational tools for analyzing big data are maximizing the potential benefits of precision medicine. Nursing science should follow and keep pace with this trend in order to develop empirical knowledge and expertise in the area of personalized nursing care. Nursing scientists must encourage, examine, and put into practice innovative research on precision nursing in order to provide evidence-based guidance to clinical practice. The applications in personalized precision nursing care include: explanations of personalized information such as the results of genetic testing; patient advocacy and support; anticipation of results and treatment; ongoing chronic monitoring; and support for shared decision-making throughout the disease trajectory. Further, attention must focus on the family and the ethical implications of taking a personalized approach to care. Nurses will need to embrace the paradigm shift to precision nursing and work collaboratively across disciplines to provide the optimal personalized care to patients. If realized, the full potential of precision nursing will provide the best chance for good health for all.
LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang
A novel DNA-coding-based knowledge discovery algorithm is proposed, and an example verifying its validity is given. It is proved that this algorithm can efficiently discover new simplified rules from the original rule set.
Davis, Stan; Botkin, Jim
Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)
陆汝钤; 石纯一; 张松懋; 毛希平; 徐晋晖; 杨萍; 范路
Common sense processing has been a key difficulty in the AI community. After analyzing various research methods on common sense, a large-scale agent-oriented commonsense knowledge base is described in this paper. We propose a new type of agent, the CBS agent; specify Csnet, a common-sense-oriented semantic network description language; augment Prolog for common sense; analyze the ontology structure; and give the execution mechanism of the knowledge base.
This paper describes the work done on Matrix Browser, a recently developed graphical user interface for exploring and navigating complex networked information spaces. This approach presents a new way of navigating information nets in a Windows-Explorer-like widget. The problem at hand was how to export arbitrary knowledge bases into Matrix Browser. This was achieved by identifying the relationships present in the knowledge bases and then forming hierarchies from this data, and these hierarc...
Michael L. Doll; Gerald Hendrickson; Gerard Lagos; Russell Pylkki; Chris Christensen; Charlie Cureija
This report discusses issues relevant to insulating glass (IG) durability performance by presenting observations and developed conclusions in a logical, sequential format. This concluding effort covers Phase II activities and focuses on beginning to quantify IG durability issues while continuing the approach presented in the Phase I activities (Appendix 1), which offered a qualitative assessment of durability issues. Phase II focused on two specific IG design classes previously presented in Phase I of this project: the typical box spacer and the thermoplastic spacer designs, including their Failure Modes and Effects Analysis (FMEA) and fault tree diagrams, chosen to address two currently used IG design options with varying components and failure modes. System failures occur due to failures of components or their interfaces. Efforts to begin quantifying the durability issues focused on the development and delivery of an included computer-based IG durability simulation program. The effort to deliver the foundation for a comprehensive IG durability simulation tool is necessary to address advancements needed to meet current and future building envelope energy performance goals. This need is based upon the current lack of IG field failure data and the lengthy field observation time necessary for this data collection. Ultimately, the simulation program is intended to be used by designers throughout the current and future industry supply chain. Its use is intended to advance IG durability as expectations grow around energy conservation and with the growth of embedded technologies required to meet energy needs. In addition, the tool has the immediate benefit of providing insight for research and improvement prioritization. Included in the simulation model presentation are elements and/or methods to address IG materials, design, process, quality, induced stress (environmental and other factors), validation, etc. In addition, acquired data
ZHANG Shijie; SONG Laigang
Sánchez Reyes, Patricia Margarita
Using the principles of biology and engineering, and with the help of computers, scientists manage to copy DNA sequences from nature and use them to create new organisms. DNA is created through engineering and computer science, managing to create life inside a laboratory. We cannot dismiss the role that synthetic biology could play in…
The construction of oceanographic ontologies is fundamental to the "digital ocean". Therefore, after introducing the new concept of an oceanographic ontology, an oceanographic ontology-based spatial knowledge query (OOBSKQ) method was proposed and developed. Because the method uses natural language to describe query conditions and the query result is highly integrated knowledge, it can provide users with direct answers while hiding the complicated computation and reasoning processes, achieving intelligent, automatic oceanographic spatial information query at the level of knowledge and semantics. A case study of a resource and environmental application in a bay has shown the implementation process of the method and its feasibility and usefulness.
Odom, Arthur L.; Barrow, Lloyd H.
The purpose of this study was to investigate students' understanding about scientifically acceptable content knowledge by exploring the relationship between knowledge of diffusion and osmosis and the students' certainty in their content knowledge. Data was collected from a high school biology class with the Diffusion and Osmosis Diagnostic Test…
Sebastian Ion CEPTUREANU
Full Text Available Knowledge, and the ability to create, access, and use it effectively, has long been both an instrument of innovation and competition and a key driver of economic and social development. However, a series of dramatic changes in recent years has increased the importance of knowledge for generating competitive advantage. The ability to process and use information globally and instantly has grown exponentially in recent years due to a combination of scientific progress in computing and distributed computing, intensified competition, innovation in all its forms, and falling operating costs in global communication networks. As barriers to accessing knowledge about a process, product, or market gradually decrease (distance, geographical features, and costs), knowledge and skills increasingly become the key to competitiveness, both locally and globally. This paper, based on a survey of 551 Romanian companies, addresses a sensitive issue in both business and academic fields: the perception of the knowledge-based economy in Romanian companies. Its conclusions can guide decision makers in Romania in developing an integrated approach to foster the knowledge-based economy in the country.
Huuskonen, P.J. [VTT Electronics, Oulu (Finland). Embedded Software
This thesis deals with computer explanation of knowledge related to design and operation of industrial plants. The needs for explanation are motivated through case studies and literature reviews. A general framework for analysing plant explanations is presented. Prototypes demonstrate key mechanisms for implementing parts of the framework. Power plants, steel mills, paper factories, and high energy physics control systems are studied to set requirements for explanation. The main problems are seen to be either lack or abundance of information. Design knowledge in particular is found missing at plants. Support systems and automation should be enhanced with ways to explain plant knowledge to the plant staff. A framework is formulated for analysing explanations of plant knowledge. It consists of three parts: 1. a typology of explanation, organised by the class of knowledge (factual, functional, or strategic) and by the target of explanation (processes, automation, or support systems), 2. an identification of explanation tasks generic for the plant domain, and 3. an identification of essential model types for explanation (structural, behavioural, functional, and teleological). The tasks use the models to create the explanations of the given classes. Key mechanisms are discussed to implement the generic explanation tasks. Knowledge representations based on objects and their relations form a vocabulary to model and present plant knowledge. A particular class of models, means-end models, are used to explain plant knowledge. Explanations are generated through searches in the models. Hypertext is adopted to communicate explanations over dialogue based on context. The results are demonstrated in prototypes. The VICE prototype explains the reasoning of an expert system for diagnosis of rotating machines at power plants. The Justifier prototype explains design knowledge obtained from an object-oriented plant design tool. Enhanced access mechanisms into on-line documentation are
Ramona – Diana Leon
Full Text Available A growing body of literature notes that in the current competitive environment knowledge has become the main source of competitive advantage, while recent research on economic growth and development has defined knowledge as the most critical resource of emerging countries. Consequently, organizations' interest in knowledge has increased, knowledge management being defined as the process of meeting existing needs, identifying and exploiting existing and/or acquired knowledge, and developing new opportunities. In other words, knowledge management facilitates productive use of information, growth of intelligence, storage of intellectual capital, strategic planning, flexible acquisition, collection of best practices, an increased likelihood of success, and more productive collaboration within the company. To benefit from all these advantages, specific tools are required, including models and systems that stimulate the creation, dissemination, and use of the knowledge held by each employee and by the organization as a whole.
Yisau, J I; Adagbada, A O; Bamidele, T; Fowora, M; Brai, B I C; Adebesin, O; Bamidele, M; Fesobi, T; Nwaokorie, F O; Ajayi, A; Smith, S I
The deployment of molecular biology techniques for diagnosis and research in Nigeria faces a number of challenges, including the cost of equipment and reagents, coupled with the dearth of personnel skilled in the procedures and the handling of equipment. Short molecular biology training workshops were conducted at the Nigerian Institute of Medical Research (NIMR) to improve the knowledge and skills of laboratory personnel and academics in health, research, and educational facilities. Five-day molecular biology workshops were conducted annually between 2011 and 2014, with participants drawn from health facilities, research facilities, and academia. The courses consisted of theoretical and practical sessions. The impact of the workshops on knowledge and skill acquisition was evaluated by pre- and post-tests consisting of 25 multiple-choice and other questions. Sixty-five participants took part in the workshops. The mean knowledge of molecular biology as evaluated by the pre- and post-test assessments was 8.4 (95% CI 7.6-9.1) and 13.0 (95% CI 11.9-14.1), respectively. The mean post-test score was significantly greater than the mean pre-test score. The molecular biology workshops significantly increased the knowledge and skills of participants in molecular biology techniques. © 2017 by The International Union of Biochemistry and Molecular Biology, 2017.
Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, verification of their knowledge bases becomes important. The conventional Petri net approach, studied recently for knowledge base verification, has been found inadequate for the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base and apply the method to the verification of a simple knowledge base. 8 refs., 4 figs. (Author)
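The reachability analysis mentioned above can be illustrated on an ordinary place/transition net (a minimal sketch, not the paper's enhanced colored Petri net; the places, transitions, and the "dead rule" below are invented):

```python
from collections import deque

# A transition consumes tokens from its input places and produces tokens in its
# output places; a marking maps each place to a token count. A rule that is never
# enabled in any reachable marking (here t3) signals a knowledge-base anomaly.
TRANSITIONS = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
    "t3": ({"p4": 1}, {"p3": 1}),  # p4 is never marked: t3 models a dead rule
}

def reachable(initial):
    """Breadth-first search over markings; returns all reachable markings."""
    seen = {tuple(sorted(initial.items()))}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in TRANSITIONS.values():
            if all(m.get(p, 0) >= n for p, n in pre.items()):  # enabled?
                nxt = dict(m)
                for p, n in pre.items():
                    nxt[p] -= n
                for p, n in post.items():
                    nxt[p] = nxt.get(p, 0) + n
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
    return seen

def dead_transitions(initial):
    """Transitions never enabled in any reachable marking."""
    dead = set(TRANSITIONS)
    for key in reachable(initial):
        m = dict(key)
        for name, (pre, _) in TRANSITIONS.items():
            if all(m.get(p, 0) >= n for p, n in pre.items()):
                dead.discard(name)
    return dead

print(dead_transitions({"p1": 1}))  # → {'t3'}
```

Colored Petri nets additionally attach typed data ("colors") to tokens and guards to transitions, which makes the state space larger but the analysis idea the same.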
Diane C. Darland
Full Text Available The primary goal of this project was to assess long-term retention of concepts and critical thinking skills in individuals who completed a Developmental Biology course. Undergraduates who had completed the course between 2006 and 2009 were recently contacted and asked to complete a professional goals survey and a multiple-choice developmental biology assessment test (DBAT targeting four levels of learning. The DBAT was designed to assess students’ retention of knowledge and skills related to factual recall, concept application, data analysis, and experimental design. Performance of the 2006–2009 cohorts was compared to that of students enrolled in 2010 who completed the DBAT at the beginning and the end of the semester. Participants from the 2010 course showed significant learning gains based on pre- and posttest scores overall and for each of the four levels of learning. No significant difference in overall performance was observed for students grouped by year from 2006–2010. Participants from the 2006–2009 cohorts scored slightly, but significantly, higher on average if they enrolled in graduate or professional training. However, performance on individual question categories revealed no significant differences between those participants with and without post-undergraduate training. Scores on exams and a primary literature critique assignment were correlated with DBAT scores and thus represent predictors of long-term retention of developmental biology knowledge and skills.
The paper presents some of the results from a recently completed Ph.D. program on disciplinarity and interdisciplinarity in problem-based learning (PBL). Disciplinary content in PBL programs has been questioned in recent years, so stronger concepts of how knowledge is actually organized and structured in PBL are needed to qualify this discussion. This paper focuses on the research question: How has the structuring/organization of knowledge in curricula changed over time, and what kinds of connections and interrelations between disciplines/subjects can be identified in current PBL courses? The research has aimed to conceptualize how various knowledge areas blend in two educational contexts applying PBL. Interrelationships have often been referred to as inter-, cross-, or trans-disciplinarity. However, these terms are ambiguous. Thus I introduce the term transversality to suggest that knowledge...
Chaplin, Susan B.; Manske, Jill M.
This article describes the curriculum for a highly student-centered human biology course constructed around a series of themes that enables the integration of the same basic paradigms found in a traditional survey lecture course without sacrificing essential content. The theme-based model enhances student interest, ability to integrate knowledge,…
Andrews, Alison E.
Automating flow field zoning in two dimensions is an important step toward easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.
Tokuda, T; Jaakkola, H; Yoshida, N
Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modelling, design and specification of information systems, and multimedia information modelling.
The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of DATALOG. First the concept of fuzzy Datalog is summarized, then its extensions to intuitionistic- and interval-valued fuzzy logic are given, and the concept of bipolar fuzzy Datalog is introduced. Based on these ideas, the concept of a multivalued knowledge base is defined as a quadruple of background knowledge, a deduction mechanism, a connecting algorithm, and a function set of the program, which helps determine the uncertainty levels of the results. Finally, a possible evaluation strategy is given.
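The idea of propagating uncertainty levels through deduction can be sketched with a single fuzzy rule under the Gödel (minimum) t-norm (an illustration only; the predicates and degrees are invented, and real fuzzy Datalog is considerably richer):

```python
# Facts carry a truth degree in [0, 1]. A rule's conclusion receives the minimum
# of its body degrees, capped by the rule's own degree (Goedel t-norm), iterated
# to a fixpoint as in bottom-up Datalog evaluation.
facts = {("parent", "a", "b"): 0.9, ("parent", "b", "c"): 0.8}

def infer(facts, rule_degree=0.95):
    """Apply grandparent(X,Z) <- parent(X,Y), parent(Y,Z) until fixpoint."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for (p1, x, y), d1 in list(derived.items()):
            for (p2, y2, z), d2 in list(derived.items()):
                if p1 == p2 == "parent" and y == y2:
                    deg = min(d1, d2, rule_degree)
                    key = ("grandparent", x, z)
                    if derived.get(key, 0) < deg:
                        derived[key] = deg
                        changed = True
    return derived

print(infer(facts)[("grandparent", "a", "c")])  # → 0.8
```

Interval-valued or bipolar variants would replace the single degree with an interval or a (positive, negative) pair, with the combination operators adjusted accordingly.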
LUAN Shangmin; DAI Guozhong; LI Wei
This paper presents a programmable approach to revising knowledge bases consisting of clauses. Some theorems and lemmas are shown in order to give procedures for generating maximally consistent subsets. Then a complete procedure and an incomplete procedure for generating maximally consistent subsets are presented, and the correctness of the procedures is shown. Furthermore, a way to implement knowledge base revision is presented, and a prototype system is introduced. Compared with related work, the main characteristic of our approach is that it can be implemented by a computer program.
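A brute-force version of generating maximally consistent subsets can be sketched as follows (an illustration of the concept, not the paper's procedures; consistency is checked by exhaustive truth assignment, so this only scales to tiny clause sets):

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Brute-force SAT check: literals are ints, -n is the negation of atom n."""
    atoms = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        # A clause is satisfied if some literal agrees with the assignment.
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def maximal_consistent_subsets(clauses):
    """All consistent subsets not strictly contained in a larger consistent one."""
    subsets = []
    for r in range(len(clauses), 0, -1):  # largest subsets first
        for combo in combinations(clauses, r):
            if satisfiable(combo) and not any(set(combo) <= set(s) for s in subsets):
                subsets.append(combo)
    return subsets

# {p}, {not p}, {q}: two maximal consistent subsets, each dropping one of p / not p.
clauses = [(1,), (-1,), (2,)]
for s in maximal_consistent_subsets(clauses):
    print(s)
# → ((1,), (2,))
# → ((-1,), (2,))
```

The paper's complete procedure avoids this exponential enumeration by exploiting the structure established in its theorems; the sketch only fixes the definition being computed.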
To solve the Imperfect Theory Problem (ITP) faced by Explanation-Based Generalization (EBG), this paper proposes a methodology named Deep Knowledge-Based Learning Methodology (DKBLM) and gives an implementation of DKBLM called the Hierarchically Distributed Learning System (HDLS). As an example of HDLS's application, this paper shows a learning system (MLS) in the meteorology domain and its run on a simplified example. DKBLM can acquire experiential knowledge with causality in it. It is applicable to domains in which experiments are relatively difficult to carry out and in which many knowledge systems at different levels are available for the same domain (such as weather forecasting).
Sickel, Aaron J.; Friedrichsen, Patricia
Pedagogical content knowledge (PCK) has become a useful construct to examine science teacher learning. Yet, researchers conceptualize PCK development in different ways. The purpose of this longitudinal study was to use three analytic lenses to understand the development of three beginning biology teachers' PCK for teaching natural selection simulations. We observed three early-career biology teachers as they taught natural selection in their respective school contexts over two consecutive years. Data consisted of six interviews with each participant. Using the PCK model developed by Magnusson et al. (1999), we examined topic-specific PCK development utilizing three different lenses: (1) expansion of knowledge within an individual knowledge base, (2) integration of knowledge across knowledge bases, and (3) knowledge that explicitly addressed core concepts of natural selection. We found commonalities across the participants, yet each lens was also useful to understand the influence of different factors (e.g., orientation, subject matter preparation, and the idiosyncratic nature of teacher knowledge) on PCK development. This multi-angle approach provides implications for considering the quality of beginning science teachers' knowledge and future research on PCK development. We conclude with an argument that explicitly communicating lenses used to understand PCK development will help the research community compare analytic approaches and better understand the nature of science teacher learning.
Joshua A. Drew
Full Text Available Conservation biology and environmental anthropology are disciplines that are both concerned with the identification and preservation of diversity, in one case biological and in the other cultural. Both conservation biology and the study of traditional ecological knowledge function at the nexus of the social and natural worlds, yet historically there have been major impediments to integrating the two. Here we identify linguistic, cultural, and epistemological barriers between the two disciplines. We argue that the two disciplines are uniquely positioned to inform each other and to provide critical insights and new perspectives on the way these sciences are practiced. We conclude by synthesizing common themes found in conservation success stories, and by making several suggestions on integration. These include cross-disciplinary publication, expanding memberships in professional societies and conducting multidisciplinary research based on similar interests in ecological process, taxonomy, or geography. Finally, we argue that extinction threats, be they biological or cultural/linguistic, are imminent, and that by bringing these disciplines together we may be able to forge synergistic conservation programs capable of protecting the vivid splendor of life on Earth.
Losko, Sascha; Heumann, Klaus
The vast quantities of information generated by academic and industrial research groups are reflected in a rapidly growing body of scientific literature and exponentially expanding resources of formalized data including experimental data from "-omics" platforms, phenotype information, and clinical data. For bioinformatics, several challenges remain: to structure this information as biological networks enabling scientists to identify relevant information; to integrate this information as specific "knowledge bases"; and to formalize this knowledge across multiple scientific domains to facilitate hypothesis generation and validation and, thus, the generation of new knowledge. Risk management in drug discovery and clinical research is used as a typical example to illustrate this approach. In this chapter we will introduce techniques and concepts (such as ontologies, semantic objects, typed relationships, contexts, graphs, and information layers) that are used to represent complex biomedical networks. The BioXM Knowledge Management Environment is used as an example to demonstrate how a domain such as oncology is represented and how this representation is utilized for research.
C. W. du Toit
Full Text Available It would appear that the epistemological tradition of the West is culminating in the present science-religion debate. The evolutionary model is being used increasingly in different disciplines as a guideline to understand humans and their action in the world. The struggle for explaining the action of God has shifted from the world of history and texts to the invisible level of quantum physics and molecular biology. It seems that levels of indeterminacy in quantum mechanics and autopoietic systems offer space to explain the action of God. On the human level integrity is sought by linking the highest level of consciousness and rationality to the very basic level of molecular and genetic structures. These issues are dealt with and specific attention is given to autopoietic systems and the biological roots of rationality.
This thesis reports on a design research project about a learning, supervising and teaching strategy to enable students in agricultural preparatory vocational secondary education (VMBO) to recognize the functionality of biological knowledge of reproduction in work placement sites. Although biologica
S.Thanga Ramya; P. Rangarajan
As large collections of publicly available video data grow day by day, the need to query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem. This paper addresses the specific aspect of inferring semantics automatically from raw video data using different knowledge-based methods. In particular, this paper focuses on three techniques: rules, Hidden Markov Models (HMMs), and Dynamic Bayesian Networks (DBNs).
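Of the three techniques, HMM-based inference is the easiest to sketch: the Viterbi algorithm recovers the most likely hidden label sequence from observed low-level features (the states, probabilities, and feature names below are invented for illustration):

```python
import math

# Toy HMM: hidden scene labels, observed audio-level features.
states = ["dialogue", "action"]
start = {"dialogue": 0.6, "action": 0.4}
trans = {"dialogue": {"dialogue": 0.7, "action": 0.3},
         "action":   {"dialogue": 0.4, "action": 0.6}}
emit = {"dialogue": {"quiet": 0.8, "loud": 0.2},
        "action":   {"quiet": 0.1, "loud": 0.9}}

def viterbi(obs):
    """Most likely hidden state sequence for the observation sequence obs."""
    # v[s] = log-probability of the best path ending in state s
    v = {s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}
    back = []  # back[t][s] = best predecessor of s at step t+1
    for o in obs[1:]:
        nv, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[p] + math.log(trans[p][s]))
            nv[s] = v[prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        v, back = nv, back + [ptr]
    best = max(states, key=lambda s: v[s])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["quiet", "loud", "loud"]))  # → ['dialogue', 'action', 'action']
```

Dynamic Bayesian Networks generalize this by allowing several interacting hidden variables per time step, at the cost of a more expensive inference procedure.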
Jae, M.S.; Yoo, W.S.; Park, S. S.; Choi, H.K. [Hansung Univ., Seoul (Korea)
Severe accident management can be defined as the use of existing and alternative resources, systems, and actions to prevent or mitigate a core-melt accident in nuclear power plants. TRAIN (Training pRogram for AMP In NPP), developed for training control room staff and the technical group, is introduced in this report. TRAIN comprises a phenomenological knowledge base (KB), an accident sequence KB, and accident management procedures with AM strategy control diagrams and information needs. TRAIN can contribute to training by helping staff obtain phenomenological knowledge of severe accidents, understand plant vulnerabilities, and solve problems under high stress. 24 refs., 76 figs., 102 tabs. (Author)
Zhu Bing; Li Jinzong; Cheng Aijun
A fast knowledge-based method for recognizing harbor targets in large gray-scale remote-sensing images is presented. First, the distribution features and inherent features are analyzed according to knowledge of harbor targets; then, two methods for extracting candidate harbor regions are devised for harbors of different sizes; after that, thresholds are used to segment land from sea, with strategies for controlling segmentation error; finally, harbor recognition is carried out according to the harbor's inherent character (a semi-enclosed region of seawater).
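The land/sea threshold segmentation step can be sketched as follows (a toy illustration with an invented threshold and synthetic pixel values, not the paper's error-controlled strategy):

```python
def segment(img, threshold):
    """Label each pixel as sea (True) if darker than the threshold
    (the dark-water convention here is an assumption)."""
    return [[px < threshold for px in row] for row in img]

def sea_fraction(mask):
    """Fraction of the patch classified as sea."""
    cells = [c for row in mask for c in row]
    return sum(cells) / len(cells)

# Tiny synthetic patch: dark values ~ water, bright values ~ land.
img = [[30, 35, 200],
       [28, 40, 210],
       [25, 220, 230]]
mask = segment(img, 100)
print(sea_fraction(mask))  # 5 of 9 pixels classified as sea
```

A real pipeline would pick the threshold from the image histogram and then test candidate regions for the semi-enclosed seawater shape; only the basic thresholding step is shown here.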
Eck, van Pascal; Engelfriet, Joeri; Fensel, Dieter; Harmelen, van F.A.H.; Venema, Yde; Willems, Mark
During the last years, a number of formal specification languages for knowledge-based systems have been developed. Characteristic for knowledge-based systems are a complex knowledge base and an inference engine which uses this knowledge to solve a given problem. Specification languages for knowledge
This paper proposes an approach to functional knowledge representation based on problem reduction, which represents the organization of problem-solving activities at two levels: reduction and reasoning. The former makes the functional plans for problem solving, while the latter constructs functional units, called handlers, for executing the subproblems designated by these plans. This approach emphasizes that the representation of domain knowledge should be closely combined with (rather than separated from) its use, and therefore provides a set of reasoning-level primitives to construct handlers and formulate the control strategies for executing them. As reduction-level primitives, handlers are used to construct handler-associative networks, which become the executable representation of problem-reduction graphs, in order to realize problem-solving methods suited to domain features. Besides, handlers and their control slots can be used to focus the attention of knowledge acquisition and reasoning control.
Jain, Lakhmi; Watada, Junzo; Howlett, Robert
This book contains innovative research from leading researchers who presented their work at the 17th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2013, held in Kitakyushu, Japan, in September 2013. The conference drew a competitive field of 236 contributors, from which 38 authors expanded their contributions and only 21 were published. A plethora of techniques and innovative applications are represented within this volume. The chapters are organized around four themes: data mining, knowledge management, advanced information processes, and system modelling applications. Each topic contains multiple contributions, and many offer case studies or innovative examples. Anyone who wants to work with information repositories or process knowledge should consider reading one or more chapters focused on their technique of choice. They may also benefit from reading other chapters to assess if an alternative technique represents a more suitable app...
Kamdar, Maulik R; Dumontier, Michel
Ebola virus (EBOV), of the family Filoviridae, is an NIAID Category A lethal human pathogen. It is responsible for causing Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services, and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets, and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard.
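A competency question of the kind described can be illustrated by matching a basic graph pattern against an in-memory triple list (a stand-in for the real SPARQL endpoint; all resource and predicate names below are invented):

```python
# Toy triple store; in Ebola-KB these would be RDF triples behind SPARQL.
triples = [
    ("EBOV", "hasProtein", "VP35"),
    ("EBOV", "hasProtein", "GP"),
    ("VP35", "interactsWith", "NP"),
    ("drugX", "targets", "VP35"),
]

def match(pattern, triples):
    """Return variable bindings; strings starting with '?' are variables."""
    results = [{}]
    for pat in pattern:  # join each triple pattern against current bindings
        new = []
        for binding in results:
            for triple in triples:
                b = dict(binding)
                ok = True
                for p, t in zip(pat, triple):
                    if p.startswith("?"):
                        if b.setdefault(p, t) != t:  # variable already bound differently
                            ok = False
                    elif p != t:                      # constant mismatch
                        ok = False
                if ok:
                    new.append(b)
        results = new
    return results

# "Which drugs target a protein of EBOV?" as a two-pattern query.
q = [("EBOV", "hasProtein", "?p"), ("?d", "targets", "?p")]
print(match(q, triples))  # → [{'?p': 'VP35', '?d': 'drugX'}]
```

The equivalent SPARQL would select `?d` where `:EBOV :hasProtein ?p . ?d :targets ?p`; the nested-loop join above is the simplest possible evaluation of such a basic graph pattern.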
Morgan, T J H; Laland, K N
Humans are characterized by an extreme dependence on culturally transmitted information and recent formal theory predicts that natural selection should favor adaptive learning strategies that facilitate effective copying and decision making. One strategy that has attracted particular attention is conformist transmission, defined as the disproportionately likely adoption of the most common variant. Conformity has historically been emphasized as significant in the social psychology literature, and recently there have also been reports of conformist behavior in non-human animals. However, mathematical analyses differ in how important and widespread they expect conformity to be, and relevant experimental work is scarce, and generates findings that are both mutually contradictory and inconsistent with the predictions of the models. We review the relevant literature considering the causation, function, history, and ontogeny of conformity, and describe a computer-based experiment on human subjects that we carried out in order to resolve ambiguities. We found that only when many demonstrators were available and subjects were uncertain was subject behavior conformist. A further analysis found that the underlying response to social information alone was generally conformist. Thus, our data are consistent with a conformist use of social information, but as subjects' behavior is the result of both social and asocial influences, the resultant behavior may not be conformist. We end by relating these findings to an embryonic cognitive neuroscience literature that has recently begun to explore the neural bases of social learning. Here conformist transmission may be a particularly useful case study, not only because there are well-defined and tractable opportunities to characterize the biological underpinnings of this form of social learning, but also because early findings imply that humans may possess specific cognitive adaptations for effective social learning.
Raad, Nawal Abou; Chatila, Hanadi
This paper investigates Lebanese grade 7 biology teachers' mathematical knowledge and skills by exploring how they explain a visual representation in an activity based on the mathematical concept of a function. Twenty Lebanese in-service biology teachers participated in the study and were interviewed about their explanation for the…
Luckie, Douglas B.; Rivkin, Aaron M.; Aubry, Jacob R.; Marengo, Benjamin J.; Creech, Leah R.; Sweeder, Ryan D.
We studied gains in student learning over eight semesters in which an introductory biology course curriculum was changed to include optional verbal final exams (VFs). Students could opt to demonstrate their mastery of course material via structured oral exams with the professor. In a quantitative assessment of cell biology content knowledge,…
Southard, Katelyn; Wince, Tyler; Meddleton, Shanice; Bolger, Molly S.
Research has suggested that teaching and learning in molecular and cellular biology (MCB) is difficult. We used a new lens to understand undergraduate reasoning about molecular mechanisms: the knowledge-integration approach to conceptual change. Knowledge integration is the dynamic process by which learners acquire new ideas, develop connections…
Minor, Jody L.; Kauffman, William J. (Technical Monitor)
Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by government agencies and industry in the US, as well as by international space agencies. The term 'knowledgebase' was used because so much information and capability was brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add material contamination data as they become available, creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool and highly encourages the community not only to use the tool but to add data to it as well.
James, Mark; Mackey, Ryan; Tikidjian, Raffi
The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.
How do we acquire our knowledge about psychiatric disorders, and how did the current biological way of thinking in psychiatry originate? With the help of the philosophy of Michel Foucault and Nikolas Rose, this essay describes the conditions that made possible today's biological approach in psychiatry. It will become clear that research in the life sciences, and the psychiatric knowledge arising from this research, are shaped and formed in a complex network of social, economic, political and scientific forces. The biological approach to psychiatric disorders is the product of present-day relationships between scientific developments and commercial corporations.
To gain sustainable competitive advantage, modern knowledge-based organizations must promote a proactive and flexible management, permanently attuned to the changes which occur in the business environment. In this context, the paper analyses the impact factors of the environment which could determine a firm to initiate a programme of strategic organizational change. Likewise, the paper identifies the main organizational variables involved in a changing process and emphasizes the essential role which managers and entrepreneurs have in the substantiation, elaboration and implementation of organizational change models.
Ali Akbar ASADI-POOYA
Objective: This study investigates the awareness and perception of epilepsy amongst biology teachers in Fars province, Iran. Materials & Methods: A sample of high school biology teachers in Fars province, Iran, filled out an investigator-designed questionnaire including questions about their knowledge of and attitude concerning epilepsy. There were 17 questions in the questionnaire; nine addressed knowledge and the rest were about attitude and perception. Results: Forty-two teachers completed the questionnaires. More than two-thirds of the participants had a fairly desirable awareness of the definition, whereas only approximately 40% knew something about the etiology and treatment of epilepsy. More than two-thirds of the participants had a positive attitude towards epilepsy; however, misconceptions and negative attitudes were observed. Conclusion: Educational programs for biology teachers, and also for other teachers, are necessary to improve their knowledge, attitude, and perception of epilepsy.
This paper reports some of the reasoning, conclusions, and practical results we have reached while working on one of the most interesting problems of modern science. It is a brief report by a group of scientists from the Laboratory of Artificial Intelligence Systems on their experience in the field of knowledge engineering. Research in this area started in our Laboratory more than 10 years ago, at roughly the moment of another rise in Artificial Intelligence caused by the mass emergence of expert systems. The tasks of knowledge engineering have varied over time, and the focal point of our research has varied with them. Certainly, we have not solved all the problems originating in this area, and our knowledge still has an approximate nature; nevertheless, the outcomes we have obtained seem important and interesting. We therefore describe our experience in building knowledge-based systems, and expert systems in particular.
LU RuQian; JIN Zhi
The first part of this paper reviews our efforts on knowledge-based software engineering, namely PROMIS, started in the 1990s. The key point of PROMIS is to generate applications automatically based on domain knowledge as well as software knowledge. That is featured by separating the development of domain knowledge from the development of software. But in PROMIS, we did not find an appropriate representation for the domain knowledge. Fortunately, in our recent work, we found such a carrier for knowledge modules, i.e. knowware. Knowware is a commercialized form of domain knowledge. This paper briefly introduces the basic definitions of knowware, knowledge middleware and knowware engineering. Three life cycle models of knowware engineering and the design of corresponding knowware implementations are given. Finally we discuss application system automatic generation and domain knowledge modeling on the J2EE platform, which combines the techniques of PROMIS, knowware and J2EE, and the development and deployment framework, i.e. PROMIS/KW**.
Service failure and recovery is a well-established area of services research. Research has shown that service recovery is critically important from a managerial perspective in terms of maintaining customer relationships, yet few firms excel at handling service failures. A growing number of managers report that customers tend to be dissatisfied with their service recovery efforts, that their employees cannot improve service processes when they experience recovery situations, and that their companies still do not learn from service failure. This service recovery ineffectiveness has been attributed to the competing interests of managing employees, customers and processes. We agree with the contention that, to address these criticisms, complaint management must acknowledge and find new approaches to achieve consistency and to correct the misalignment of interests that can exist between the actions of the organisation and the needs of its customers and employees. We believe that research in the customer knowledge management literature represents one effective means to enhance a firm's ability to implement a cohesive service recovery strategy. A comprehensive knowledge-creation system framework is proposed for complaint management, in which the Socialization, Externalization, Combination and Internalization (SECI) modes and the various 'ba' proposed by Nonaka and Konno are introduced. Empirical research, involving a case study, is presented to illustrate the proposed framework. This framework is believed to pave the way for e-knowledge-based complaint management.
Jagarlapudi, Sarma A R P; Kishan, K V Radha
Several database systems have been developed to provide valuable information, from the bench chemist to the biologist, and from the medical practitioner to the pharmaceutical scientist, in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.
Yu, Zhiwen; Wong, Hau-San; You, Jane; Yang, Qinmin; Liao, Hongying
The adoption of microarray techniques in biological and medical research provides a new way for cancer diagnosis and treatment. In order to perform successful diagnosis and treatment of cancer, discovering and classifying cancer types correctly is essential. Class discovery is one of the most important tasks in cancer classification using biomolecular data. Most existing works adopt single clustering algorithms to perform class discovery from biomolecular data. However, single clustering algorithms have limitations, which include a lack of robustness, stability, and accuracy. In this paper, we propose a new cluster ensemble approach called knowledge-based cluster ensemble (KCE), which incorporates prior knowledge of the data sets into the cluster ensemble framework. Specifically, KCE represents the prior knowledge of a data set in the form of pairwise constraints. Then, the spectral clustering algorithm (SC) is adopted to generate a set of clustering solutions. Next, KCE transforms the pairwise constraints into confidence factors for these clustering solutions. After that, a consensus matrix is constructed by considering all the clustering solutions and their corresponding confidence factors. The final clustering result is obtained by partitioning the consensus matrix. Compared with single clustering algorithms and conventional cluster ensemble approaches, knowledge-based cluster ensemble approaches are more robust, stable and accurate. The experiments on cancer data sets show that: 1) KCE works well on these data sets; 2) KCE not only outperforms most of the state-of-the-art single clustering algorithms, but also outperforms most of the state-of-the-art cluster ensemble approaches.
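The KCE pipeline described in this abstract — base clusterings, constraint-derived confidence factors, and a weighted consensus matrix — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the toy data, the must-link constraints, and the smoothed confidence factor are all assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

# Toy data with a known three-cluster structure
X, _ = make_blobs(n_samples=60, centers=3, random_state=0)

# Prior knowledge as must-link pairwise constraints (hypothetical examples)
must_link = [(0, 1), (2, 3)]

# 1) Generate a set of base clustering solutions (varying the random seed)
labelings = [
    SpectralClustering(n_clusters=3, random_state=s,
                       affinity="nearest_neighbors").fit_predict(X)
    for s in range(5)
]

# 2) Confidence factor: fraction of must-link constraints a solution satisfies
def confidence(labels, constraints):
    ok = sum(labels[i] == labels[j] for i, j in constraints)
    return (ok + 1) / (len(constraints) + 1)  # smoothed to avoid zero weights

weights = np.array([confidence(l, must_link) for l in labelings])
weights /= weights.sum()

# 3) Weighted consensus (co-association) matrix over all solutions
n = X.shape[0]
consensus = np.zeros((n, n))
for w, labels in zip(weights, labelings):
    consensus += w * (labels[:, None] == labels[None, :])

# 4) Partition the consensus matrix to obtain the final clustering
final = SpectralClustering(n_clusters=3,
                           affinity="precomputed").fit_predict(consensus)
```

The consensus matrix is symmetric with entries in [0, 1], so it can be reused directly as a precomputed affinity for the final partitioning step.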
SU Hai; JIANG Zuhua
Due to the increasing amount and complexity of knowledge in product design, a knowledge map based on the design process is presented as a tool to support the reuse of the product design process and promote the sharing of product design knowledge. The relationship between design task flow and knowledge flow is discussed; a knowledge organizing method based on design task decomposition and a visualization method to support knowledge retrieval and sharing in product design are proposed. A knowledge map system to manage the knowledge in the product design process is built with Visual C++ and SVG. Finally, a brief case study is provided to illustrate the construction and application of the knowledge map in fuel pump design.
Scott, Bobby R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Lin, Yong [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Wilder, Julie [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Belinsky, Steven [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States)
Our main research objective was to determine the biological bases for low-dose, radiation-induced adaptive responses in the lung and use the knowledge gained to produce an improved risk model for radiation-induced lung cancer that accounts for activated natural protection, genetic influences, and the role of epigenetic regulation (epiregulation). Currently, low-dose radiation risk assessment is based on the linear no-threshold hypothesis, which is now known to be unsupported by a large volume of data.
Christopher Y Park
A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate, as determined by genomic data, and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics
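As a rough sketch of the FKT idea — not the authors' pipeline — transferred annotations can be filtered by a functional-appropriateness check against the target organism's own data before training a classifier. Everything below (the synthetic features, the agreement test, the SVM settings) is an assumption made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Organism-specific features for 200 genes (e.g. co-expression summaries)
X = rng.normal(size=(200, 10))
true_labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy "pathway membership"

# Sequence-similarity transfer proposes labels for 80 genes; some are wrong
proposed = true_labels[:80].copy()
proposed[:10] = 1 - proposed[:10]

# Accept a transferred label only when the target organism's own data agree
# (a crude stand-in for the "functionally appropriate" check)
agree = X[:80, 0] * (2 * proposed - 1) > -0.5
idx = np.where(agree)[0]

# Train on the filtered transferred annotations; predict for unannotated genes
clf = SVC(kernel="rbf").fit(X[idx], proposed[idx])
pred = clf.predict(X[80:])
```

The filter discards transfers that clearly conflict with the organism-specific data, so the classifier trains mostly on correct labels even though the raw transfer was noisy.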
Tilchin, Oleg; Kittany, Mohamed
The goal of an approach to Adaptive Knowledge Management (AKM) of project-based learning (PBL) is to intensify subject study through guiding, inducing, and facilitating the development of students' knowledge, accountability skills, and collaborative skills. Knowledge development is attained by knowledge acquisition, knowledge sharing, and knowledge…
Knowledge classification based on the new product development procedure: the new product development procedure requires that each stage has necessary activities and output results, generally files. In fact, these output results are the coded product knowledge produced in each stage; there is also product knowledge and process knowledge without coding, for instance the knowledge and experience of project management.
TANG Ming-hao; MA Jiang-li; ZHANG Chi; ZHANG Zu-fang; FANG Xiao-wei
Virtual sewing is one of the key techniques in the realization of 3D computer-aided garment design. After analysis and comparison, this article brings forward a method of virtual sewing based on draping knowledge. Two new concepts are introduced: one is the conversion of garment pattern pieces' attributes, and the other is the "clothes shell", a middleware to simplify the mapping process. Meanwhile, the method implements the mapping process and completes virtual sewing by setting anchor points and lines on the virtual model. The method has been applied to a simulation system and proved to be successful.
Maisseu, A. [WONUC, 49, rue Lauriston, 75116 Paris (France)]. E-mail: email@example.com
Conventional economic analyses are based on dogmas that are frequently out of phase with reality. It is thus very difficult to introduce technology, technological innovation and technical progress correctly into the theoretical constructions derived from the application of these principles. The same applies to the consideration of pollution and the depletion of natural resources. These difficult problems, which bedevil the definition of sustainable development, find no satisfactory answer in these theoretical considerations. The consideration of Gestalteconomy helps to resolve these difficulties by opening the door to the entrepreneurial, practical management of knowledge. (author)
Jairam, B.N.; Agarwal, A.; Emrich, M.L.
Recent trends in software engineering research focus on the incorporation of AI techniques. The feasibility of an overlap between AI and software engineering is examined. The benefits of merging the two fields are highlighted. The long-term goal is to automate the software development process. Some projects being undertaken towards the attainment of this goal are presented as examples. Finally, research on the Oak Ridge Reservation aimed at developing a knowledge-based software project management aid is presented. 25 refs., 1 tab.
Maintenance decision making is becoming more and more a management concern. Some decades ago, maintenance was still often considered an unavoidable side effect of production. The perception of maintenance has evolved considerably. One of the current issues is the maintenance concept, being the mix of maintenance interventions and the general framework for determining this mix. In this paper we describe a modular framework, called Knowledge Based Maintenance, for developing a customised maintenance concept. After describing the general framework and its decision support use, some case experiences are given. This experience covers some elements of the proposed framework.
Bonev, Martin; Hvam, Lars
a considerably high amount of their resources is required for designing and specifying the majority of their product assortment. As design decisions are hereby based on knowledge and experience about the behaviour and applicability of construction techniques and materials for a predefined design situation, smart… tools need to be developed to support these activities. In order to achieve a higher degree of design automation, this study proposes a framework for using configuration systems within the CAD environment, together with suitable geometric modeling techniques, on the example of a Danish manufacturer…
Background: The discipline of health or medical informatics is relatively new, in that the literature has existed for only 40 years. The British Computer Society (BCS) health group was of the opinion that work should be undertaken to explore the scope of medical or health informatics. Once the mapping work was completed, the International Medical Informatics Association (IMIA) expressed the wish to develop it further to define the knowledge base of the discipline and produce a comprehensive, internationally applicable framework. This article will also highlight the move from the expert opinion of a small group to the analysis of publications to generalise and refine the initial findings, and illustrate the importance of triangulation. Objectives: The aim of the project was to explore the theoretical constructs underpinning the discipline of health informatics, produce a cognitive map of the existing understanding of the discipline, and develop the knowledge base of health informatics for the IMIA and the BCS. Method: The five-phase project, described in this article, undertaken to define the discipline of health informatics used four forms of triangulation. Results: The output from the project is a framework giving the 14 major headings (Subjects) and 245 elements which together describe the current perception of the discipline of health informatics. Conclusion: This article describes how each phase of the project was strengthened through using triangulation within and between the different phases. This was done to ensure that the investigators could be confident in the confirmation and completeness of data, and assured of the validity and reliability of the final output of the 'IMIA Knowledge Base' that was endorsed by the IMIA Board in November 2009.
Schwendimann, Beat Adrian
-specific form of concept map, called Knowledge Integration Map (KIM), which aims to help learners connect ideas across levels (for example, genotype and phenotype levels) towards an integrated understanding of evolution. Using a design-based research approach (Brown, 1992; Cobb et al., 2003), three iterative studies were implemented in ethnically and economically diverse public high school classrooms using the web-based inquiry science environment (WISE) (Linn et al., 2003; Linn et al., 2004). Study 1 investigates concept maps as generative assessment tools. Study 1A compares the concept map generation and critique process of biology novices and experts. Findings suggest that concept maps are sensitive to different levels of knowledge integration but require scaffolding and revision. Study 1B investigates the implementation of concept maps as summative assessment tools in a WISE evolution module. Results indicate that concept maps can reveal connections between students' alternative ideas of evolution. Study 2 introduces KIMs as embedded collaborative learning tools. After generating KIMs, student dyads revise KIMs through two different critique activities (comparison against an expert-generated or peer-generated KIM). Findings indicate that different critique activities can promote the use of different criteria for critique. Results suggest that the combination of generating and critiquing KIMs can support integrating evolution ideas but can be time-consuming. As time in biology classrooms is limited, study 3 distinguishes the learning effects from either generating or critiquing KIMs as more time-efficient embedded learning tools. Findings suggest that critiquing KIMs can be more time-efficient than generating KIMs. Using KIMs that include common alternative ideas for critique activities can create genuine opportunities for students to critically reflect on new and existing ideas.
Critiquing KIMs can encourage knowledge integration by fostering self-monitoring of students' learning
Schwendimann, Beat Adrian
Many students leave school with a fragmented understanding of biology that does not allow them to connect their ideas to their everyday lives (Wandersee, 1989; Mintzes, Wandersee, & Novak, 1998; Mintzes, Wandersee, & Novak, 2000a). Understanding evolution ideas is seen as central to building an integrated knowledge of biology (Blackwell, Powell, & Dukes, 2003; Thagard & Findlay, 2010). However, the theory of evolution has been found difficult to understand as it incorporates a wide range of i...
Guixia Liu; Lei Liu; Chunyu Liu; Ming Zheng; Lanying Su; Chunguang Zhou
Inferring gene regulatory networks from large-scale expression data is an important topic in both cellular systems and computational biology. The inference of regulators might be the core factor for understanding actual regulatory conditions in gene regulatory networks, especially when strong regulators do work significantly. In this paper, we propose a novel approach based on combining neuro-fuzzy network models with biological knowledge to infer strong regulators and interrelated fuzzy rules. The hybrid neuro-fuzzy architecture can not only infer the fuzzy rules, which are suitable for describing the regulatory conditions in regulatory networks, but also explain the meaning of nodes and weight values in the neural network. It can obtain useful rules automatically without subjective judgments. At the same time, it does not add recursive layers to the model, and the model can also strengthen the relationships among genes and reduce calculation. We use the proposed approach to reconstruct a partial gene regulatory network of yeast. The results show that this approach can work effectively.
Aubert, Alice H.; Thrun, Michael C.; Breuer, Lutz; Ultsch, Alfred
High-frequency, in-situ monitoring provides large environmental datasets. These datasets will likely bring new insights in landscape functioning and process scale understanding. However, tailoring data analysis methods is necessary. Here, we detach our analysis from the usual temporal analysis performed in hydrology to determine if it is possible to infer general rules regarding hydrochemistry from available large datasets. We combined a 2-year in-stream nitrate concentration time series (time resolution of 15 min) with concurrent hydrological, meteorological and soil moisture data. We removed the low-frequency variations through low-pass filtering, which suppressed seasonality. We then analyzed the high-frequency variability component using Pareto Density Estimation, which to our knowledge has not been applied to hydrology. The resulting distribution of nitrate concentrations revealed three normally distributed modes: low, medium and high. Studying the environmental conditions for each mode revealed the main control of nitrate concentration: the saturation state of the riparian zone. We found low nitrate concentrations under conditions of hydrological connectivity and dominant denitrifying biological processes, and we found high nitrate concentrations under hydrological recession conditions and dominant nitrifying biological processes. These results generalize our understanding of hydro-biogeochemical nitrate flux controls and bring useful information to the development of nitrogen process-based models at the landscape scale. PMID:27572284
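The mode-finding step in this analysis can be illustrated with a short sketch. Pareto Density Estimation is swapped for a standard Gaussian kernel density estimate here, and the three-mode synthetic data stand in for the detrended nitrate series; both substitutions are assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import find_peaks

rng = np.random.default_rng(42)
# Synthetic detrended concentrations drawn from three normal modes
# (low / medium / high), mimicking the reported distribution shape
conc = np.concatenate([
    rng.normal(1.0, 0.2, 3000),
    rng.normal(3.0, 0.3, 3000),
    rng.normal(6.0, 0.4, 3000),
])

# Estimate the distribution and locate its local maxima (the modes)
kde = gaussian_kde(conc)
grid = np.linspace(conc.min(), conc.max(), 500)
density = kde(grid)
modes = grid[find_peaks(density)[0]]
```

Once the modes are located, each observation can be assigned to its nearest mode and the environmental conditions (soil moisture, discharge) summarized per mode, as the study does for the low, medium, and high nitrate regimes.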
Kelsey, R.L. [Los Alamos National Lab., NM (United States)|New Mexico State Univ., Las Cruces, NM (United States); Hartley, R.T. [New Mexico State Univ., Las Cruces, NM (United States); Webster, R.B. [Los Alamos National Lab., NM (United States)
An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective, domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows knowledge to be represented from multiple and varying viewpoints. The event construct allows the actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability; structured yet flexible knowledge design; and the sharing and reuse of knowledge class libraries.
Full Text Available Based on an analysis of the relationship between the product innovation design process and knowledge, this article proposes a theoretical model of quality function knowledge deployment. To link product innovation design with the knowledge required by the designer, the iterative method of quality function knowledge deployment is refined, and a knowledge retrieval model and a knowledge support model based on quality function knowledge deployment are established. Across the whole product design life cycle, in view of the different knowledge requirements of the conceptual design, component configuration, process planning, and production planning stages, the quality function knowledge deployment model links the required knowledge with the engineering, component, process, and production characteristics of these four stages through the mapping between function characteristics and knowledge, helping the designer track the knowledge required to realize product innovation design. An example concerning a rewinding machine is given to demonstrate the practicability and validity of product innovation design knowledge support technology based on quality function knowledge deployment.
This work was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed, and potential forms of computer-based support for inexpert designers are identified. The architecture for a support environment for SSD is proposed based on the integration of KBS and non-KBS tools for individual design tasks within SSD - the Intellipse system. The potential role of KBS tools in the domain of data-base design is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified.
Pemsel, Sofia; Wiewiora, Anna; Müller, Ralf
This paper conceptualizes knowledge governance (KG) in project-based organizations (PBOs) and its methodological approaches for empirical investigation. Three key contributions towards a multi-faceted view of KG and an understanding of KG in PBOs are advanced. These contributions include a definition of KG in PBOs, a conceptual framework to investigate KG, and a methodological framework for empirical inquiry into KG in PBO settings. Our definition highlights the contingent nature of KG processes in relation to their organizational context. The conceptual framework addresses macro- and micro-level elements of KG and their interaction. The methodological framework proposes five different research approaches, structured by differentiation and integration of various ontological and epistemological stances. Together these contributions provide a novel platform for understanding KG in PBOs and developing...
The equation on Boltzmann's tomb is S = k log W, giving 137 = 10^60, where 10^60 closely corresponds to the age of the universe in Planck times. We wish we could add "137 = 10^60" to his tomb as a contribution leading physics towards information in biology, as explained in our book "Quantum Consciousness - the Road to Reality." (1) We draft our speculation that such a step may explain the underlying physical cause of mutations. Tiny, immeasurable, and slow changes well beyond the tenth digit of the fine structure constant may suffice to change the information system in the constituent particles of nucleotides, with their external effects forcing changes in the genetic code, and with successful changes resulting in mutations. (2) Our published quantum mechanical derivation of the strong coupling implies gravity as a cumulative effect of quantum mechanical particles, further implying that the universal constant of gravity (G) cannot be constant everywhere. (1) and (2) put together should remove Darwin's confusion about the constancy of gravity. Moving planets and sunstorms should also cause changes in G on Earth unnoticeable to mankind, but large enough to have an impact on the internal particles of nucleotides, which should implicitly have an external effect on the genetic code per our theory.
Mainardi, Joseph D.; Szatkowski, G. P.
This describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project's (ADP-2302) main objective is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification and validation (V&V) task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the necessary V&V testing by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
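The 'block' clustering idea can be sketched outside CLIPS as a small bookkeeping structure (all class, phase, and rule names here are hypothetical, not from the project): rules and facts are grouped per mission phase, and entering a phase swaps in only the relevant block, keeping the active rule set that the Rete network must match against small.

```python
class PartitionedKB:
    # rules/facts grouped into per-phase 'blocks'; only one block is active
    def __init__(self):
        self.blocks = {}
        self.active_rules = []

    def define_block(self, phase, rules):
        self.blocks[phase] = list(rules)

    def enter_phase(self, phase):
        # unload everything, then load only the block for the new phase
        self.active_rules = list(self.blocks.get(phase, []))
        return len(self.active_rules)

kb = PartitionedKB()
kb.define_block("checkout", ["check_valves", "check_sensors"])
kb.define_block("launch", ["monitor_thrust", "monitor_guidance", "abort_logic"])
kb.enter_phase("checkout")
```

Because the checkout and launch blocks are mutually exclusive, the inference engine only ever sees the two or three rules relevant to the current cycle rather than all five.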
This study investigated pre-service science teachers' pedagogical content knowledge of physics, chemistry, and biology topics. The topics were light and sound; physical and chemical changes; and reproduction, growth, and evolution. A qualitative research design was utilized. Data were collected from 33 pre-service science teachers…
Full Text Available BACKGROUND: Candidate gene prioritization aims to identify promising new genes associated with a disease or a biological process from a larger set of candidate genes. In recent years, network-based methods - which utilize a knowledge network derived from biological knowledge - have been applied to gene prioritization. Biological knowledge can be encoded either through the network's links or its nodes. Current network-based methods can only encode knowledge through links. This paper describes a new network-based method that can encode knowledge in links as well as in nodes. RESULTS: We developed a new network inference algorithm called the Knowledge Network Gene Prioritization (KNGP) algorithm, which can incorporate both link and node knowledge. The performance of the KNGP algorithm was evaluated on both synthetic networks and on networks incorporating biological knowledge. The results showed that the combination of link knowledge and node knowledge provided a significant benefit across 19 experimental diseases over using link knowledge alone or node knowledge alone. CONCLUSIONS: The KNGP algorithm provides an advance over current network-based algorithms because it can encode both link and node knowledge. We hope the algorithm will aid researchers with gene prioritization.
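The abstract does not give the KNGP update rule, but the general idea of combining the two knowledge channels can be sketched with a random-walk-with-restart scheme (an assumption for illustration, not the published algorithm): link knowledge enters through the weighted adjacency matrix, node knowledge through the restart vector.

```python
import numpy as np

def prioritize(adj, node_prior, alpha=0.6, iters=200):
    # link knowledge: weighted adjacency, column-normalized to a transition matrix
    W = adj / adj.sum(axis=0, keepdims=True)
    # node knowledge: prior scores become the restart distribution
    p = node_prior / node_prior.sum()
    score = p.copy()
    for _ in range(iters):
        score = alpha * (W @ score) + (1 - alpha) * p
    return score

# toy 4-gene path network 0-1-2-3; gene 1 carries strong prior (node) knowledge
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
prior = np.array([0.1, 1.0, 0.1, 0.1])
scores = prioritize(adj, prior)
```

With both channels active, gene 1 (high prior, central position) ranks first; dropping either channel corresponds to setting a uniform prior or a uniform adjacency.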
Full Text Available A general framework for a hydraulic fault diagnosis system was studied. It consists of equipment knowledge bases, real-time databases, a fusion reasoning module, a knowledge acquisition module, and so on. A tree-structured model of fault knowledge was established, and fault-node knowledge was encapsulated using object-oriented techniques. Complete knowledge bases were built, including fault bases and diagnosis bases, which can describe fault positions, fault structure, cause-symptom relationships, diagnosis principles, and other knowledge. Taking the fault of the left and right lifting oil cylinders running out of sync as an example, the diagnostic results show that the methods were effective.
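A minimal object-oriented sketch of such a fault tree follows; the node names and symptom labels for the lifting-cylinder example are invented for illustration and are not taken from the paper's knowledge base.

```python
class FaultNode:
    # a fault-tree node: fault position/name, its symptom set, and sub-faults
    def __init__(self, name, symptoms=None, children=None):
        self.name = name
        self.symptoms = set(symptoms or [])
        self.children = children or []

    def diagnose(self, observed):
        # depth-first: report the most specific faults whose symptoms all match
        matches = []
        if self.symptoms and self.symptoms <= observed:
            deeper = [m for c in self.children for m in c.diagnose(observed)]
            matches.extend(deeper or [self.name])
        else:
            for c in self.children:
                matches.extend(c.diagnose(observed))
        return matches

root = FaultNode("lifting system", {"cylinders_out_of_sync"}, [
    FaultNode("throttle valve blocked", {"cylinders_out_of_sync", "slow_left_cylinder"}),
    FaultNode("cylinder seal leakage", {"cylinders_out_of_sync", "pressure_drop"}),
])
result = root.diagnose({"cylinders_out_of_sync", "pressure_drop"})
```

The observed symptom set matches the root and then narrows to the leakage leaf, mirroring how cause-symptom links in the fault base drive diagnosis from fault position down to root cause.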
This paper aims to present the knowledge based economy as a pillar of the knowledge society, due to the fact that in the past decades there has been a series of transitions of the global economy from the development based on traditional factors to a knowledge based economy, in which intangible goods are of vital importance.
Conner, Lindsey; Gunstone, Richard
This paper reports on a qualitative case study of the knowledge and use of learning strategies by 16 students in a final-year high school biology class, aimed at expanding their conscious knowledge of learning. Students were provided with opportunities to engage in purposeful inquiry into the biological, social and ethical aspects of cancer. A constructivist approach was implemented to access prior content and procedural knowledge in various ways. Students were encouraged to independently evaluate their learning skills through activities that promoted metacognition. Those students who planned and monitored their work produced essays of higher quality. The value and difficulties of promoting metacognitive approaches in this context are discussed, as is the idea that metacognitive processes are difficult to research because they have to be conscious in order to be identified by the learner, thereby making them accessible to the researcher.
邹湘军; 孙健; 何汉武
Following research on knowledge-based product design, product modeling based on knowledge fusion is studied in a virtual environment. Knowledge fusion is the energy source of product innovation design. Because knowledge representation is the main content of knowledge fusion, production rules, semantic networks, predicates, and object-oriented and case-based representations are discussed. Using agents with an object-oriented method, the knowledge can be represented as a set. The product knowledge set is divided into two subsets: text knowledge and engineering-graphics knowledge, which takes a different form. Manipulation of the subset knowledge and the fusion method are described. The paper also describes a six-tuple function in an agent data structure. A virtual environment computation model is proposed, and a practical example is given.
Turkan, Sultan; De Oliveira, Luciana C.; Lee, Okhee; Phelps, Geoffrey
Background/Context: The current research on teacher knowledge and teacher accountability falls short on information about what teacher knowledge base could guide preparation and accountability of the mainstream teachers for meeting the academic needs of ELLs. Most recently, research on specialized knowledge for teaching has offered ways to…
ZHOU BIN; WANG RENCHAO
A machine-learning approach was developed for the automated building of knowledge bases for soil resources mapping, using a classification tree to generate knowledge from training data. With this method, building a knowledge base for automated soil mapping was easier than using the conventional knowledge acquisition approach. The knowledge base built by the classification tree was used by the knowledge classifier to perform soil type classification of Longyou County, Zhejiang Province, China, using Landsat TM bi-temporal images and GIS data. To evaluate the performance of the resultant knowledge bases, the classification results were compared to an existing soil map based on a field survey. The accuracy assessment and analysis of the resultant soil maps suggested that the knowledge base built by the machine-learning method was of good quality for mapping the distribution model of soil classes over the study area.
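The core step, inducing a threshold rule from training samples, can be illustrated with a one-split decision stump. This is a drastically simplified stand-in for a full classification tree, and the feature name and values (a hypothetical NDVI band) are invented.

```python
def learn_stump(samples, labels, feature):
    # try midpoints between adjacent distinct values as candidate thresholds
    best = None
    vals = sorted(set(s[feature] for s in samples))
    for lo, hi in zip(vals, vals[1:]):
        t = (lo + hi) / 2
        left = [l for s, l in zip(samples, labels) if s[feature] <= t]
        right = [l for s, l in zip(samples, labels) if s[feature] > t]
        ca = max(set(left), key=left.count)    # majority class, low side
        cb = max(set(right), key=right.count)  # majority class, high side
        acc = sum((l == ca if s[feature] <= t else l == cb)
                  for s, l in zip(samples, labels)) / len(samples)
        if best is None or acc > best[0]:
            best = (acc, t, ca, cb)
    acc, t, ca, cb = best
    # the induced rule is itself an entry for the knowledge base
    return {"if": f"{feature} <= {t}", "then": ca, "else": cb, "train_acc": acc}

samples = [{"ndvi": 0.2}, {"ndvi": 0.3}, {"ndvi": 0.7}, {"ndvi": 0.8}]
labels = ["paddy", "paddy", "forest", "forest"]
rule = learn_stump(samples, labels, "ndvi")
```

The learned rule is a human-readable if-then statement, which is what makes tree induction attractive as an automated substitute for manual knowledge acquisition.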
Maria De Giusti
Full Text Available BACKGROUND: A cross-sectional survey on knowledge and perception of occupational biological risk among workers in several occupations was carried out in the industrial area of Rome. METHODS: The study was carried out in the period March-April 2010 using a questionnaire with 33 items in the following areas: (a) socio-demographic data; (b) perception of biological risks in ordinary occupational activity; (c) knowledge about biological risks; (d) biological risks in the working environment. The questionnaire was submitted to a convenience sample of workers in an industrial area in southern Rome. RESULTS: 729 participants entered the study from the following work activities: food, catering, service, farming and breeding, healthcare, school and research (males 57.2%; mean age 37.4 years, SD = 10.9). Significant associations were found between different activity areas with respect to the relevance of the biological risk (p = 0.044) and the perception of the biological risk (p < 0.001). With respect to vehicles of infectious agents, the highest percentages of the most common biological risk exposures were: air and physical contact for the catering and food group (66.7% and 61.9%, respectively); air and blood for the health and research group (73.5% and 57.0%, respectively); and physical contact and blood for the service group (63.1% and 48.3%). A significant difference of proportions was found regarding the prevalent effect caused by biological agents, namely the occurrence of infectious diseases (59.9% food group, 91.6% health and research, and 79.3% service group; p < 0.001). The perception of knowledge achieved a good rank (sufficient, much, or complete) in the food and catering group (78.3%), with a significant difference compared to other professions (p < 0.001). CONCLUSIONS: All participants show good knowledge of the effects induced by biological agents, and it is significant that almost half of the respondents are aware of the risks concerning allergies
Elffers, J.; Konijnenberg, D.; Walraven, E.M.P.; Spaan, M.T.J.
Several approaches exist to solve Artificial Intelligence planning problems, but little attention has been given to the combination of using landmark knowledge and satisfiability (SAT). Landmark knowledge has been exploited successfully in the heuristics of classical planning. Recently it was also s
Perrone-Capano, Carla; Volpicelli, Floriana; di Porzio, Umberto
Music is a universal language, present in all human societies. It pervades the lives of most human beings and can recall memories and feelings of the past, can exert positive effects on our mood, can be strongly evocative and ignite intense emotions, and can establish or strengthen social bonds. In this review, we summarize the research and recent progress on the origins and neural substrates of human musicality as well as the changes in brain plasticity elicited by listening or performing music. Indeed, music improves performance in a number of cognitive tasks and may have beneficial effects on diseased brains. The emerging picture begins to unravel how and why particular brain circuits are affected by music. Numerous studies show that music affects emotions and mood, as it is strongly associated with the brain's reward system. We can therefore assume that an in-depth study of the relationship between music and the brain may help to shed light on how the mind works and how the emotions arise and may improve the methods of music-based rehabilitation for people with neurological disorders. However, many facets of the mind-music connection still remain to be explored and enlightened.
Meng, Jun; Zhang, Jing; Luan, Yushi
Mining knowledge from gene expression data is a hot research topic in bioinformatics. Gene selection and sample classification are significant research trends, due to the large number of genes and the small sample size of gene expression data. Rough set theory has been successfully applied to gene selection, as it can select attributes without redundancy. To improve the interpretability of the selected genes, some researchers have introduced biological knowledge. In this paper, we first employ a neighborhood system to deal directly with the new information table formed by integrating gene expression data with biological knowledge, which can simultaneously present the information from multiple perspectives without weakening the information of individual genes for selection and classification. Then, we give a novel framework for gene selection and propose a significant gene selection method based on this framework, employing the reduction algorithm from rough set theory. The proposed method is applied to the analysis of plant stress response. Experimental results on three data sets show that the proposed method is effective, as it can select significant gene subsets without redundancy and achieve high classification accuracy. Biological analysis of the results shows that the interpretability is good.
Southard, Katelyn; Wince, Tyler; Meddleton, Shanice; Bolger, Molly S.
Research has suggested that teaching and learning in molecular and cellular biology (MCB) is difficult. We used a new lens to understand undergraduate reasoning about molecular mechanisms: the knowledge-integration approach to conceptual change. Knowledge integration is the dynamic process by which learners acquire new ideas, develop connections between ideas, and reorganize and restructure prior knowledge. Semistructured, clinical think-aloud interviews were conducted with introductory and upper-division MCB students. Interviews included a written conceptual assessment, a concept-mapping activity, and an opportunity to explain the biomechanisms of DNA replication, transcription, and translation. Student reasoning patterns were explored through mixed-method analyses. Results suggested that students must sort mechanistic entities into appropriate mental categories that reflect the nature of MCB mechanisms and that conflation between these categories is common. We also showed how connections between molecular mechanisms and their biological roles are part of building an integrated knowledge network as students develop expertise. We observed differences in the nature of connections between ideas related to different forms of reasoning. Finally, we provide a tentative model for MCB knowledge integration and suggest its implications for undergraduate learning. PMID:26931398
Southard, Katelyn; Wince, Tyler; Meddleton, Shanice; Bolger, Molly S
WU ZhaoHui(吴朝晖); CHEN HuaJun(陈华钧); XU JieFeng(徐杰锋)
The emergence of the semantic web will result in an enormous amount of knowledge base resources on the web. In this paper, a generic Knowledge Base Grid Architecture (KB-Grid) for building large-scale knowledge systems on the semantic web is presented. KB-Grid suggests a paradigm that emphasizes how to organize, discover, utilize, and manage web knowledge base resources. Four principal components are under development: a semantic browser for retrieving and browsing semantically enriched information, a knowledge server acting as the web container for knowledge, an ontology server for managing web ontologies, and a knowledge base directory server acting as the registry and catalog of KBs. Also a referential model of knowledge service and the mechanisms required for semantic communication within KB-Grid are defined. To verify the design rationale underlying the KB-Grid, an implementation for Traditional Chinese Medicine (TCM) is described.
Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice
Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.
Sureephong, Pradorn; Ouzrout, Yacine; Bouras, Abdelaziz
The knowledge-based economy forces companies to group together as clusters in order to maintain their competitiveness in the world market. Cluster development relies on two key success factors: knowledge sharing and collaboration between the actors in the cluster. Thus, our study proposes a knowledge management system to support knowledge management activities within the cluster. To achieve the objectives of this study, ontology plays a very important role in the knowledge management process in various ways, such as building reusable and faster knowledge bases and representing knowledge explicitly. However, creating and representing ontologies creates difficulties for organizations due to the ambiguity and unstructured nature of knowledge sources. Therefore, the objective of this paper is to propose a methodology for creating and representing ontology for organization development using a knowledge engineering approach. The handicraft cluster in Thailand is used as a case stu...
CHEN Lei; LI Dehua; LI Xiaojian; WU Chunxiang
Resources are the base and core of educational information, but current web education resources lack structure, and it remains difficult to reuse them or make them self-assembling and continually developable. Based on the knowledge structure of courses and texts, the relations among knowledge points, and knowledge units at the three levels of media material, we can build educational resource components and build TKCM (Teaching Knowledge Combination Model) on top of those components. Builders can construct and assemble the knowledge system structure and make knowledge units self-assembling, so that they can be developed and refined continually. Users can make knowledge units self-assemble and renew, and build an educational knowledge system that satisfies their demands.
Pulaski, Kirt; Casadaban, Cyprian
The coupling of data and knowledge has a synergistic effect when building an intelligent data base. The goal is to integrate the data and knowledge almost to the point of indistinguishability, permitting them to be used interchangeably. Examples given in this paper suggest that Case-Based Reasoning is a more integrated way to link data and knowledge than pure rule-based reasoning.
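A toy illustration of this data-knowledge coupling (attribute names and cases invented): each case stores raw measurements (data) together with its solution (knowledge), and retrieval uses both as one interchangeable unit.

```python
def retrieve(case_base, query, k=1):
    # nearest-neighbour retrieval over the numeric attributes named in the query
    def distance(case):
        return sum((case["features"][f] - query[f]) ** 2 for f in query) ** 0.5
    return sorted(case_base, key=distance)[:k]

# each case couples data (features) with knowledge (the recorded solution)
case_base = [
    {"features": {"temp": 90, "vibration": 0.8}, "solution": "bearing wear"},
    {"features": {"temp": 60, "vibration": 0.1}, "solution": "normal operation"},
]
best = retrieve(case_base, {"temp": 85, "vibration": 0.7})
```

Unlike a rule base, where data and knowledge live in separate structures, the case is a single record that serves as both, which is the integration the paper argues for.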
Ravgiala, Rebekah Rae
Theories regarding the development of expertise hold implications for alternative and traditional certification programs and the teachers they train. The literature suggests that when compared to experts in the field of teaching, the behaviors of novices differ in ways that are directly attributed to their pedagogical content knowledge. However, few studies have examined how first and second year biology teachers entering the profession from traditional and alternative training differ in their demonstration of subject-specific pedagogical content knowledge. The research problem in this multicase, naturalistic inquiry investigated how subject-specific pedagogical content knowledge was manifested among first and second year biology teachers in the task of transforming subject matter into forms that are potentially meaningful to students when explicit formal training has been and has not been imparted to them as preservice teachers. Two first year and two second year biology teachers were the subjects of this investigation. Allen and Amber obtained their certification through an alternative summer training institute in consecutive years. Tiffany and Tricia obtained their certification through a traditional, graduate level training program in consecutive years. Both programs were offered at the same northeastern state university. Participants contributed to six data gathering techniques including an initial semi-structured interview, responses to the Conceptions of Teaching Science questionnaire (Hewson & Hewson, 1989), three videotaped biology lessons, evaluation of three corresponding lesson plans, and a final semi-structured interview conducted at the end of the investigation. An informal, end-of-study survey intended to offer participants an opportunity to disclose their thoughts and needs as first year teachers was also employed. Results indicate that while conceptions of teaching science may vary slightly among participants, there is no evidence to suggest that
Yao Jianchu; Zhou Ji; Yu Jun
A concept of an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge, and case-based sample knowledge. Using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in one model. The intelligent optimal design model based on such compound knowledge and the automatically generated decomposition principles based on it are also presented. In practice, it has been applied to production planning, process scheduling, and optimization of the production process of a refining and chemical works, achieving a great profit. In particular, the methods and principles are adaptable not only to continuous process industries, but also to discrete manufacturing.
Integration of aquatic ecology and biological oceanographic knowledge for development of area-based eutrophication assessment criteria leading to water resource remediation and utilization management: a case study in Tha Chin, the most eutrophic river of Thailand.
Meksumpun, Charumas; Meksumpun, Shettapong
This research was carried out in the Tha Chin Watershed in the central part of Thailand, with attempts to apply multidisciplinary knowledge for understanding ecosystem structure and response to anthropogenic pollution and natural impacts, leading to a proposal for an appropriate zonation management approach for sustainable utilization of the area. The water quality status of the Tha Chin River and Estuary was determined by analyzing ecological, hydrological, and coastal oceanographic information from recent field surveys (March 2006 to November 2007) together with secondary data on irrigation, land utilization, and socio-economic status. Results indicated that the Tha Chin River and Estuary were eutrophic all year round. Almost 100% of the brackish to marine areas reflected strongly hypertrophic water conditions during both dry and high-loading periods. High NH4+ and PO43- loads from surrounding agricultural land use, agro-industry, and communities continuously flowed into the aquatic environment. The deteriorated ecosystem was clearly evidenced by dramatically low DO levels (ca 1 mg/l) in riverine to coastal areas, and Noctiluca and Ceratium red tide outbreaks occurred around the tidal front close to the estuary. Accordingly, fishery resources significantly decreased. Some riverine benthic habitats became dominated by deposit-feeding worms, e.g. Lumbriculus, Branchiura, and Tubifex, while estuarine benthic habitats reflected a succession of polychaetes and small bivalves. Analysis of integrated ecosystem responses indicated that changing functions were significantly influenced by particulate and nutrient dynamics in the system. Based on the overall results, the Tha Chin River and Estuary should be divided into 4 zones (I: upper freshwater zone; II: middle freshwater zone; III: lower freshwater zone; and IV: lowest brackish to marine zone) for further management schemes on water remediation. In this study, the importance of habitat morphology and water flow
Ma, Li; Keinan, Alon; Clark, Andrew G
While the importance of epistasis is well established, specific gene-gene interactions have rarely been identified in human genome-wide association studies (GWAS), mainly due to low power associated with such interaction tests. In this chapter, we integrate biological knowledge and human GWAS data to reveal epistatic interactions underlying quantitative lipid traits, which are major risk factors for coronary artery disease. To increase power to detect interactions, we only tested pairs of SNPs filtered by prior biological knowledge, including GWAS results, protein-protein interactions (PPIs), and pathway information. Using published GWAS and 9,713 European Americans (EA) from the Atherosclerosis Risk in Communities (ARIC) study, we identified an interaction between HMGCR and LIPC affecting high-density lipoprotein cholesterol (HDL-C) levels. We then validated this interaction in additional multiethnic cohorts from ARIC, the Framingham Heart Study, and the Multi-Ethnic Study of Atherosclerosis. Both HMGCR and LIPC are involved in the metabolism of lipids and lipoproteins, and LIPC itself has been marginally associated with HDL-C. Furthermore, no significant interaction was detected using PPI and pathway information, mainly due to the stringent significance level required after correcting for the large number of tests conducted. These results suggest the potential of biological knowledge-driven approaches to detect epistatic interactions in human GWAS, which may hold the key to exploring the role gene-gene interactions play in connecting genotypes and complex phenotypes in future GWAS.
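The knowledge-filtered interaction test can be sketched as follows. Gene names from the abstract are reused only as labels; the data, effect size, and the plain least-squares interaction model are invented for illustration, not the study's actual statistical procedure.

```python
import numpy as np

def interaction_beta(g1, g2, y):
    # fit y ~ b0 + b1*g1 + b2*g2 + b3*(g1*g2) by least squares; return b3
    X = np.column_stack([np.ones_like(g1), g1, g2, g1 * g2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]

# only SNP pairs backed by prior knowledge are tested, shrinking the
# multiple-testing burden relative to an exhaustive pairwise scan
prior_pairs = {("HMGCR", "LIPC")}
rng = np.random.default_rng(1)
geno = {g: rng.integers(0, 3, 500).astype(float) for g in ["HMGCR", "LIPC", "APOE"]}
# simulated phenotype with a true HMGCR x LIPC interaction effect of 0.5
y = 0.5 * geno["HMGCR"] * geno["LIPC"] + rng.normal(0, 0.1, 500)
betas = {(a, b): interaction_beta(geno[a], geno[b], y) for a, b in prior_pairs}
```

Testing one knowledge-backed pair instead of all possible pairs is what preserves power after multiple-testing correction, which is the central argument of the chapter.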
Young, C.; Ballard, S.; Hipp, J.
To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher-level products (e.g. 2D and 3D travel-time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and its effectiveness is ultimately determined by the quality of its data content, but access to that content is completely controlled by the information system in which the content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough, and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks and did the rest manually. The ever-growing volume of KB data to be merged, as well as the need for more control of the merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several
1. Introduction - the metaphor of a "knowledge-based economy"; 2. The Triple Helix as a model of the knowledge-based economy; 3. Knowledge as a social coordination mechanism; 4. Neo-evolutionary dynamics in a Triple Helix of coordination mechanisms; 5. The operation of the knowledge base; 6. The restructuring of knowledge production in a KBE; 7. The KBE and the systems-of-innovation approach; 8. The KBE and neo-evolutionary theories of innovation; 8.1 The construction of the evolving unit; 8.2 User-producer relations in systems of innovation; 8.3 'Mode-2' and the production of scientific knowledge; 8.4 A Triple Helix model of innovations; 9. Empirical studies and simulations using the TH model; 10. The KBE and its measurement; 10.1 The communication of meaning and information; 10.2 The expectation of social structure; 10.3 Configurations in a knowledge-based economy
Adane Nega Tarekegn
Knowledge-based systems can be designed to solve complex medical problems. They incorporate the expert's knowledge, coded into facts, rules, heuristics, and procedures. Incorporating local languages into a knowledge-based system allows end users to communicate with the system in a simpler and easier way. In this study, a localized knowledge-based system is developed for TB disease diagnosis using Ethiopia's national language. To develop it, tacit knowledge was acquired from domain experts through interviews, and explicit knowledge was captured from documented sources through analysis of relevant documents. The acquired knowledge was then modeled using a decision tree structure representing the concepts and procedures involved in disease diagnosis, and production rules were used to represent the domain knowledge. The localized knowledge-based system was developed in the SWI-Prolog version 6.4.1 programming language, whose natural language processing features support localizing the system. As a result, the system was implemented with an Amharic (the national language of Ethiopia) user interface. With localization, users in remote areas and users who are not proficient in foreign languages benefit enormously. The system was tested and evaluated to ensure that its performance is accurate and that it is usable by physicians and patients. The average performance of the localized knowledge-based system was 81.5%.
Kelsey, R.L. [Los Alamos National Lab., NM (United States)|New Mexico State Univ., Las Cruces, NM (United States); Hartley, R.T. [New Mexico State Univ., Las Cruces, NM (United States); Webster, R.B. [Los Alamos National Lab., NM (United States)
An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.
The report reviews the economic transition in Korea, summarizing the challenge the knowledge revolution poses to the country's development strategy, and the analytical and policy framework for a knowledge-based economy. It explores the need to increase overall productivity, and areas of relative inefficiency, namely, inadequate conditions for the generation of knowledge and information; insuf...
Ling, Chen Wai; Sandhu, Manjit S.; Jain, Kamal Kishore
Purpose: This paper seeks to examine the views of executives working in an American-based multinational company (MNC) about knowledge sharing, barriers to knowledge sharing, and strategies to promote knowledge sharing. Design/methodology/approach: This study was carried out in phases. In the first phase, a typology of organizational mechanisms for…
Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.
Knowledge base exchange can be considered a generalization of data exchange in which the aim is to exchange, between a source and a target connected through mappings, not only explicit knowledge, i.e., data, but also implicit knowledge in the form of axioms. This problem has been investigated recently
In this paper, a brief survey of knowledge-based animation techniques is given. Then a VideoStream-based Knowledge Representation Model (VSKRM) for Joint Objects is presented, which includes the knowledge representation of Graphic Objects, Actions, and VideoStreams. Next, a general description of the UI framework of a system based on the VSKRM model is given. Finally, a conclusion is reached.
Why mainstream curiosity about earth-science topics, and thus appraise these topics as of public interest? Namely, to influence how humankind's activities intersect the geosphere. How to mainstream that curiosity? Namely, by weaving diverse concerns into common threads drawing on a wide range of perspectives: be it the beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting scholarly investigation with the concerns of citizens at large; and by threading through traditional or modern media, arts, or story-telling. Three examples: First, "weather"; weather is a topic of primordial interest for most people: weather impacts human lives, be it for settlement, food, mobility, hunting, fishing, or battle. It is the single earth-science topic that went "prime time" when the broadcasting of weather forecasts started in the early 1950s, with meteorologists presenting their work to the public daily. Second, "knowledge base"; the earth sciences are relevant for modern societies' economies and value setting: earth sciences provide insights into the evolution of life-bearing planets, the functioning of Earth's systems, and the impact of humankind's activities on biogeochemical systems on Earth. These insights bear on the production of goods, living conditions, and individual well-being. Third, "life-style"; citizens' urban culture prejudices their experiential connections: earth-science-related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling. In the course of the global urbanisation process this culture has given way to society-centric story-telling. Only recently has anthropogenic global change triggered discussions on geoengineering, hazard mitigation, and demographics, which, interwoven with arts, linguistics, and cultural histories, offer a rich narrative
Fensel, D; Groenboom, R
The paper introduces a software architecture for the specification and verification of knowledge-based systems, combining conceptual and formal techniques. Our focus is component-based specification, enabling reuse. We identify four elements of the specification of a knowledge-based system: a ta
Sasson, Amir; Blomgren, Atle
This study presents the Norwegian upstream oil and gas industry (defined as all oil- and gas-related firms located in Norway, regardless of ownership) and evaluates the industry according to the underlying dimensions of a global knowledge hub: cluster attractiveness, education attractiveness, talent attractiveness, R&D and innovation attractiveness, ownership attractiveness, environmental attractiveness, and cluster dynamics.
Williams, K.E.; Kotnour, T.
In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D.
Using computer-based monitoring systems that rely on tests could be the most effective way of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…
ZHANG Liqin; LUO Hong; LONG Wenjie; LEI Yongtao; CAI Qing; LAN Mei; ZHONG Li
In 2007-2008, a systematic survey, collection, and arrangement of the agricultural biological resources and traditional cultural knowledge of the Hani people was carried out in 8 counties, 15 towns, and 23 village committees of Yunnan Province. A total of 299 samples of agricultural biological resources related to the production and living of the Hani people were obtained. According to purpose of utilization, the samples were divided into grain crops, medicinal plants, vegetables, fruit trees, and oil crops, making up 48.2%, 21.7%, 18.4%, 7.7%, and 2.0% of the samples respectively. The survey indicated that the planting and breeding industries play the dominant role in the rural social economy of the Hani people, so agricultural biological resources are the fundamental means of production maintaining Hani rural social development. The current situation of the agricultural biological resources of the Hani people in Yunnan and the reasons for their growth and decline were analyzed, and the utilization, protection, and development of these resources are discussed.
Teachers are the most important factor in student learning (National Research Council, 1996); yet little is known about the specialized knowledge held by experienced teachers. The purpose of this study was twofold: first, to make explicit the pedagogical content knowledge (PCK) for teaching diffusion and osmosis held by experienced biology teachers and, second, to reveal how topic-specific PCK informs teacher practice. The Magnusson et al. (1999) PCK model served as the theoretical framework for the study. The overarching research question was: When teaching lessons on osmosis and diffusion, how do experienced biology teachers draw upon their topic-specific pedagogical content knowledge? Data sources included observations of two consecutive lessons, three semi-structured interviews, lesson plans, and student handouts. Data analysis indicated five of the six teachers held a constructivist orientation to science teaching and engaged students in explorations of diffusion and osmosis prior to introducing the concepts to students. Explanations for diffusion and osmosis were based upon students' observations and experiences during explorations. All six teachers used representations at the molecular, cellular, and plant organ levels to serve as foci for explorations of diffusion and osmosis. Three potential learning difficulties identified by the teachers included: (a) understanding vocabulary terms, (b) predicting the direction of osmosis, and (c) identifying random molecular motion as the driving force for diffusion and osmosis. Participants used student predictions as formative assessments to reveal misconceptions before instruction and evaluate conceptual understanding during instruction. This study includes implications for teacher preparation, research, and policy.
Knowledge acquisition has always been the bottleneck of artificial intelligence, and it is a critical point in product family design. Here a knowledge acquisition method is introduced based on a scenario model together with repertory grid and attribute ordering table techniques. The method acquires knowledge by presenting product design cases to an expert and recording the means and knowledge the expert uses to describe and resolve problems. It uses objects to express design entities, scenarios to describe the design process, and Event-Condition-Action (ECA) rules to drive the design process, with the repertory grid and attribute ordering table techniques used to elicit design knowledge. This is an effective way to capture both explicit and implicit knowledge, and its validity is demonstrated with examples.
This study was designed to better understand how children begin to make the transition from seeing the natural world to scientifically observing the natural world during shared family activity in an informal learning environment. Specifically, this study addressed research questions: (1) What is the effect of differences in parent conversational style and disciplinary knowledge on children's observations of biological phenomena? (2) What is the relationship between parent disciplinary knowledge and conversational style to children's observations of biological phenomena? and (3) Can parents, regardless of knowledge, be trained to use a teaching strategy with their children that can be implemented in informal learning contexts? To address these questions, 79 parent-child dyads with children 6-10 years old participated in a controlled study in which half of the parents used their natural conversational style and the other half were trained to use particular conversational strategies during family observations of pollination in a botanical garden. Parents were also assigned to high and low knowledge groups according to their disciplinary knowledge of pollination. Data sources included video recordings of parent-child observations in a garden, pre-post child tasks, and parent surveys. Findings revealed that parents who received training used the conversational strategies more than parents who used their natural conversational style. Parents and children who knew more about pollination at the start of the study exhibited higher levels of disciplinary talk in the garden, which is to be expected. However, the use of the conversational strategies also increased the amount of disciplinary talk in the garden, independent of what families knew about pollination. The extent to which families engaged in disciplinary talk in the garden predicted significant variance in children's post-test scores. In addition to these findings, an Observation Framework (Eberbach & Crowley, 2009
Slota, Martin; Swift, Terrance
Over the years, nonmonotonic rules have proven to be a very expressive and useful knowledge representation paradigm. They have recently been used to complement the expressive power of Description Logics (DLs), leading to the study of integrative formal frameworks, generally referred to as hybrid knowledge bases, where both DL axioms and rules can be used to represent knowledge. The need to use these hybrid knowledge bases in dynamic domains has called for the development of update operators, which, given the substantially different ways Description Logics and rules are usually updated, has turned out to be an extremely difficult task. In [SL10], a first step towards addressing this problem was taken, and an update operator for hybrid knowledge bases was proposed. Despite its significance (not only for being the first update operator for hybrid knowledge bases in the literature, but also because it has some applications), this operator was defined for a restricted class of problems where only the ABox was all...
TAI Li-gang; ZHONG Ting-xiu
This paper proposes a mechanical product intelligent rapid design approach based on integrated technologies. Adopting knowledge-based engineering to reuse and manage product design knowledge, and combining feature modeling and parametric design based on an existing CAD/CAE/CAM system with product family modeling technology and an engineering database, the system establishes a product family knowledge base, mainly comprising a product family case base and a rule base. The system also uses Web technology to let customers individually customize products remotely over the internet. An application example is given at the end.
Aim Ecologists seeking to describe patterns at ever larger scales require compilations of data on the global abundance and distribution of species. Comparable compilations of biological data are needed to elucidate the mechanisms behind these patterns, but have received far less attention. We assess the availability of biological data across an entire assemblage: the well-documented demersal marine fauna of the United Kingdom. We also test whether data availability for a species depends on its taxonomic group, maximum body size, the number of times it has been recorded in a global biogeographic database, or its commercial and conservation importance. Location Seas of the United Kingdom. Methods We defined a demersal marine fauna of 973 species from 15 phyla and 40 classes using five extensive surveys around the British Isles. We then quantified the availability of data on eight key biological traits (termed biological knowledge) for each species from online databases. Relationships between biological knowledge and our predictors were tested with generalized linear models. Results Full data on eight fundamental biological traits exist for only 9% (n = 88) of the UK demersal marine fauna, and 20% of species completely lack data. Clear trends in our knowledge exist: fish (median biological knowledge score = six traits) are much better known than invertebrates (one trait). Biological knowledge increases with biogeographic knowledge and (to a lesser extent) with body size, and is greater in species that are commercially exploited or of conservation concern. Main conclusions Our analysis reveals deep ignorance of the basic biology of a well-studied fauna, highlighting the need for far greater efforts to compile biological trait data. Clear biases in our knowledge, relating to how well sampled or 'important' species are, suggest that caution is required in extrapolating from small subsets of biologically well-known species to ecosystem-level studies.
LIANG Jun; JIANG Zuhua; ZHEN Lu; SU Hai; WANG Kuoming
To support and serve engineering design, creative design based on knowledge management is proposed. The key knowledge factors of creative design are analyzed and discussed, and knowledge extraction tools are used to distill the important knowledge serving as the knowledge resource of creative design. The implementation of the creative design mode is described and executed, which can promote the intellectual assets of the enterprise and shorten the period of creative design. With this approach, design inspiration and conceptual design can be achieved expediently and effectively.
This paper discusses the progress in transition to knowledge-based economy in Saudi Arabia. As for the methodology, this paper uses updated secondary data obtained from different sources. It uses both descriptive and comparative approaches and uses the OECD definition of knowledge-based economy and
Hartog, J. den; Kate, T. ten; Gerbrands, J.
In this paper, a knowledge-based framework for the top-down interpretation and segmentation of maps is presented. The interpretation is based on a priori knowledge about map objects, their mutual spatial relationships and potential segmentation problems. To reduce computational costs, a global segme
Harandi, Mehdi T.
Reviews the overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, being developed at the University of Illinois Urbana-Champaign. The system's major units (program design, program coding, and intelligent debugging) and additional functions are described. (MBR)
Powers, Patrick James
This paper examines the creation of new knowledge bases in higher education in light of the ideas of Alvin Toffler, whose trilogy "Future Shock" (1970), "The Third Wave" (1980), and "Powershift" (1990) focuses on the processes, directions, and control of change, respectively. It discusses the increasingly important role that knowledge bases, the…
International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…
Phenotypes are the observable characteristics of an organism, and they are widely recorded in biology and medicine. To facilitate data integration, ontologies that formally describe phenotypes are being developed in several domains. I will describe a formal framework to describe phenotypes. A formalized theory of phenotypes is not only useful for domain analysis, but can also be applied to assist in the diagnosis of rare genetic diseases, and I will show how our results on the ontology of phenotypes is now applied in biomedical research.
The competitive advantages in a knowledge-based economy can no longer be attributed to single nodes in the network. Political economies are increasingly reshaped by knowledge-based developments that upset market equilibria and institutional arrangements. The network coordinates the subdynamics of (i) wealth production, (ii) organized novelty production, and (iii) private appropriation versus public control. The interaction terms generate a complex dynamics which cannot be expected to contain central coordination. However, the knowledge infrastructure of systems of innovations can be measured, for example, in terms of university-industry-government relations. The mutual information in these three dimensions indicates the globalization of the knowledge base. Patent statistics and data from the Internet are compared in terms of this indicator.
BACKGROUND: Constructing coexpression networks and performing network analysis using large-scale gene expression data sets is an effective way to uncover new biological knowledge; however, the methods used for gene association in constructing these coexpression networks have not been thoroughly evaluated. Since different methods lead to structurally different coexpression networks and provide different information, selecting the optimal gene association method is critical. METHODS AND RESULTS: In this study, we compared eight gene association methods - Spearman rank correlation, Weighted Rank Correlation, Kendall, Hoeffding's D measure, Theil-Sen, Rank Theil-Sen, Distance Covariance, and Pearson - and focused on their true knowledge discovery rates in associating pathway genes and constructing coordination networks of regulatory genes. We also examined the behavior of the different methods on microarray data with different properties, and whether the biological processes involved affect the efficiency of the different methods. CONCLUSIONS: We found that the Spearman, Hoeffding, and Kendall methods are effective in identifying coexpressed pathway genes, whereas the Theil-Sen, Rank Theil-Sen, Spearman, and Weighted Rank methods perform well in identifying coordinated transcription factors that control the same biological processes and traits. Surprisingly, the widely used Pearson method is generally less efficient, as is the Distance Covariance method, which can find gene pairs with multiple relationships. Our analyses clearly show that the Pearson and Distance Covariance methods behave distinctly from the other six methods. The efficiency of the different methods varies with the data properties to some degree and is largely contingent upon the biological processes involved, which necessitates pre-analysis to identify the best-performing method for gene association and coexpression network construction.
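The contrast between rank-based and linear association measures that drives these results can be illustrated with a small sketch (illustrative only, not the paper's evaluation pipeline; the simulated expression pair is an assumption): on a monotone but nonlinear relationship, Spearman and Kendall score near 1 while Pearson does not.

```python
import numpy as np
from scipy import stats

# Simulated expression pair: y is a monotone but nonlinear function
# of x, so ranks are almost perfectly preserved while a linear
# (Pearson) fit is not.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = np.exp(x) + rng.normal(scale=0.05, size=200)

pearson, _ = stats.pearsonr(x, y)    # linear association
spearman, _ = stats.spearmanr(x, y)  # rank-based
kendall, _ = stats.kendalltau(x, y)  # rank-based (concordant pairs)
```

Here the rank-based coefficients land near 1 while Pearson is noticeably lower, which mirrors the paper's finding that rank methods better recover coexpressed pathway genes whose relationships need not be linear.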
This article explores the potential of Problem-Based Learning (PBL) for epistemological competence in an engineering education area. The main idea is to explore how the processes in PBL promote knowledge acquisition that leads to deep individual content learning. A review has been done of theoretical and conceptual aspects, as well as supportive evidence from several empirical findings. Within this view, knowledge is constructed from basic knowledge of concepts, principles, and procedural knowledge, integrated with previous knowledge and experiences. The concepts and principles are linked and integrated with each other, forming procedural knowledge, which promotes deep content learning. However, supportive evidence from the recent research literature is inconclusive, calling for more studies to provide empirical evidence on the effectiveness of PBL for knowledge acquisition.
Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz
In the multi-agent systems literature, the role concept is increasingly researched as an abstraction to scope the beliefs, norms, and goals of agents and to shape the relationships of agents in an organization. In this research, we propose a knowledgebase architecture that increases the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents can work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture has also been implemented and incorporated into the SEAGENT multi-agent system development framework.
Mihaela I. MUNTEAN
Collaboration involves a different approach to business, focused on managing business relationships between people, within and across groups, and within and between organizations. Collaborative enterprises differ from other businesses in a number of ways, and collaborative working needs to be simultaneously a business philosophy, a strategy, and an operational way of working. Effective collaboration unlocks the potential of the collective knowledge and intellectual capital of the organization and its networks of business partners, suppliers, and customers. At the core of true collaboration is the ability to share and catalogue knowledge, ideas, standards, best practices, and lessons learned, and to be able to retrieve that knowledge from anywhere at any time. Knowledge management is not a goal in itself: businesses do not exist to spread and advance knowledge; they exist to sell competitive products and services of high quality. Based on these considerations, we propose some knowledge management approaches for portal-based collaborative environments.
Johansson, B; Shahsavar, N; Ahlfeldt, H; Wigertz, O
One of the most important categories of decision-support systems in medicine is data-driven systems, where the inference engine is linked to a database. It is therefore important to find methods that facilitate the implementation of the database queries referred to in knowledge modules. A method is described for linking clinical databases to a knowledge base of Arden Syntax modules. The method is based on a query meta-database containing templates for SQL queries, maintained by a database administrator. During knowledge module authoring, the medical expert refers only to a code in the query meta-database; no knowledge is needed of the database model or the naming of attributes and relations. The method uses standard tools, such as C++ and ODBC, which makes it possible to implement the method on many platforms and to link to different clinical databases in a standardized way.
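A minimal sketch of the query meta-database idea, assuming hypothetical table and column names (the paper's actual schema is not given): the knowledge author refers to a query only by its code, and the SQL template resolved from the meta-database is executed against the clinical database.

```python
import sqlite3

# Meta-database maintained by the DB administrator: maps query codes
# to SQL templates. All names here are illustrative, not the paper's.
meta = sqlite3.connect(":memory:")
meta.execute("CREATE TABLE query_templates (code TEXT PRIMARY KEY, sql TEXT)")
meta.execute(
    "INSERT INTO query_templates VALUES "
    "('LAST_GLUCOSE', 'SELECT value FROM lab_results "
    "WHERE patient_id = ? AND test = ''glucose'' "
    "ORDER BY taken_at DESC LIMIT 1')"
)

# Toy clinical database the templates run against.
clinical = sqlite3.connect(":memory:")
clinical.execute("CREATE TABLE lab_results (patient_id, test, value, taken_at)")
clinical.executemany(
    "INSERT INTO lab_results VALUES (?, ?, ?, ?)",
    [(42, "glucose", 5.1, "2023-01-01"),
     (42, "glucose", 6.3, "2023-02-01")],
)

def run_query(code, *params):
    """Resolve a template code in the meta-database, then run the
    resolved SQL against the clinical database, as a knowledge-module
    database reference would."""
    (sql,) = meta.execute(
        "SELECT sql FROM query_templates WHERE code = ?", (code,)
    ).fetchone()
    return clinical.execute(sql, params).fetchall()

print(run_query("LAST_GLUCOSE", 42))  # [(6.3,)]
```

The indirection is the point: the knowledge module stays valid when the administrator changes the underlying table or attribute names, because only the template in the meta-database needs updating.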
Wilkening, Lisa K.; Carr, Dorthe Bame; Young, Christopher John; Hampton, Jeff (Lockheed Martin Mission Services, Houston, TX); Martinez, Elaine
From 2002 through 2006, the Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and refined a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the KB structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.
赵小蕾; 左晓宇; 覃继恒; 梁岩; 张乃尊; 栾奕昭; 饶绍奇
Biological pathways are widely used in gene function research, but existing pathway knowledge is incomplete and needs further expansion. Bioinformatic prediction offers an effective and economical route to pathway expansion. This article proposes a new method for predicting gene pathway membership that integrates protein-protein interaction knowledge with information from the Gene Ontology (GO) database. First, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways of the target gene's neighbors at the protein-protein interaction level are selected as candidate pathways; then, the pathway membership of the target gene is decided by testing whether the genes in a candidate pathway are enriched in the GO nodes associated with the target gene. Predictions were made separately using protein-protein interaction information from the Human Protein Reference Database (HPRD) and the Biological General Repository for Interaction Datasets (BioGRID). The results show that in both data sets, as the number of interaction neighbors increases, both the average prediction accuracy (the proportion of all pathways annotated to target genes that were successfully predicted) and the relative accuracy (among genes with at least one annotated pathway successfully predicted, the proportion of genes whose annotated pathways were all predicted correctly) trend upward. When the number of interaction neighbors reaches 22, the average prediction accuracy reaches 96.2% (HPRD) and 96.3% (BioGRID), while the relative accuracy is 93.3% (HPRD) and 84.1% (BioGRID). Further validation against newer database versions of the 89 genes whose pathway annotations had been updated since the older versions showed that at least one updated pathway was predicted correctly for 50 genes, among which all updated pathways were predicted correctly for 43 genes, a relative accuracy of 86.0%. These results indicate that the method is a reliable and effective approach to pathway expansion.
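The two-step prediction described above (candidate pathways from PPI neighbors, then an enrichment decision) can be sketched as follows. The data structures and the use of a hypergeometric enrichment test are illustrative assumptions, not the authors' exact implementation.

```python
from scipy.stats import hypergeom

def candidate_pathways(gene, ppi_neighbors, kegg):
    """Step 1: the KEGG pathways of a gene's PPI neighbors form the
    candidate set (gene -> neighbor list, gene -> pathway list maps)."""
    return {p for nb in ppi_neighbors.get(gene, ()) for p in kegg.get(nb, ())}

def go_enrichment_p(pathway_genes, go_genes, n_genes_total):
    """Step 2: hypergeometric test of whether a candidate pathway's
    genes are over-represented among the genes annotated to the
    target gene's GO terms (smaller p = stronger enrichment)."""
    overlap = len(pathway_genes & go_genes)
    return hypergeom.sf(overlap - 1, n_genes_total,
                        len(go_genes), len(pathway_genes))
```

A target gene would then be assigned to the candidate pathways whose enrichment p-value falls below a chosen threshold; the abstract's accuracy curves correspond to varying the number of PPI neighbors feeding step 1.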
Thompson, Katerina V; Chmielewski, Jean; Gaines, Michael S; Hrycyna, Christine A; LaCourse, William R
The National Experiment in Undergraduate Science Education project funded by the Howard Hughes Medical Institute is a direct response to the Scientific Foundations for Future Physicians report, which urged a shift in premedical student preparation from a narrow list of specific course work to a more flexible curriculum that helps students develop broad scientific competencies. A consortium of four universities is working to create, pilot, and assess modular, competency-based curricular units that require students to use higher-order cognitive skills and reason across traditional disciplinary boundaries. Purdue University; the University of Maryland, Baltimore County; and the University of Miami are each developing modules and case studies that integrate the biological, chemical, physical, and mathematical sciences. The University of Maryland, College Park, is leading the effort to create an introductory physics for life sciences course that is reformed in both content and pedagogy. This course has prerequisites of biology, chemistry, and calculus, allowing students to apply strategies from the physical sciences to solving authentic biological problems. A comprehensive assessment plan is examining students' conceptual knowledge of physics, their attitudes toward interdisciplinary approaches, and the development of specific scientific competencies. Teaching modules developed during this initial phase will be tested on multiple partner campuses in preparation for eventual broad dissemination.
US Fish and Wildlife Service, Department of the Interior — We are developing an inventory of conservation based actions, collaborative structures and sources of governance at multiple scales to serve as a basis for an...
Full Text Available Nowadays, we may observe that the rules of the traditional economy have changed. The new economy, the knowledge-based economy, also determines major changes in organizations' resources, structure, strategic objectives, departments, accounting, and goods. In our research we want to underline how accounting rules, regulations and paradigms have changed to cope with political, economic and social challenges, as well as with the emergence of the knowledge-based organization. We also try to find out where Romanian accounting stands on the hard road of evolution from traditional to knowledge-based.
李莉敏; 唐文献; 方明伦; 杨延麟
A platform for distributed design and resource sharing is important for small and medium-sized companies developing products to improve competitiveness. Against the background of creative product design, a knowledge model based on collaborative innovation development of products (CIDP) is proposed. The characteristics of CIDP are analyzed, and the framework and key technologies of the CIDP-platform-based knowledge are studied. Through integration of existing system and interface designs, a development platform has been built to support CIDP within knowledge-based engineering (KBE). An example is presented, indicating that the prototype system is maneuverable and practical.
Defines grey documentation as documents issued informally and not available through normal channels and discusses the role that grey documentation can play in the social work knowledge base. Topics addressed include grey documentation and science; social work and the empirical approach in knowledge development; and dissemination of grey…
Garicano, Luis; Rossi-Hansberg, Esteban
Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…
FENG Haocheng; LUO Mingqiang; LIU Hu; WU Zhe
Design knowledge and experience are the basis for carrying out aircraft conceptual design tasks, due to the high complexity and integration of the tasks during this phase. When carrying out the same task, different designers may need individual strategies to fulfill their own demands. A knowledge-based and extensible method for building aircraft conceptual design systems is studied considering the above requirements. Based on this theory, a knowledge-based aircraft conceptual design environment with open architecture, called the knowledge-based and extensible aircraft conceptual design environment (KEACDE), is built to enable designers to wrap add-on extensions and make their own aircraft conceptual design systems. The architecture, characteristics and other design and development aspects of KEACDE are discussed. A civil airplane conceptual design system (CACDS) is achieved using KEACDE. Finally, a civil airplane design case is presented to demonstrate the usability and effectiveness of this environment.
J. A. Koroleva
Full Text Available Results are presented from research on the retrieval time of actual-data searches in temporal knowledge bases built on the basis of states of events. This type of knowledge base gives quick access to relevant states as well as to history based on event chronology. It is shown that data storage for a deep retrospective significantly increases the search time due to the growth of the decision tree. The search time for temporal knowledge bases has been investigated as a function of the average number of events prior to the current state. Experimental results confirm the advantage of knowledge bases built on states of events over traditional methods for the design of intelligent systems.
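The event-based storage idea can be illustrated with a minimal sketch (names invented): states are not stored directly but reconstructed by replaying the event chronology, which is why retrieval cost grows with the number of events preceding the state of interest.

```python
class TemporalKB:
    """Minimal event-based temporal store: state is replayed from events."""

    def __init__(self):
        self.events = []  # (timestamp, attribute, value)

    def record(self, t, attr, value):
        self.events.append((t, attr, value))

    def state_at(self, t):
        # Replay all events up to time t; cost grows with history depth.
        state = {}
        for ts, attr, value in sorted(self.events):
            if ts <= t:
                state[attr] = value
        return state

kb = TemporalKB()
kb.record(1, "status", "open")
kb.record(2, "owner", "alice")
kb.record(3, "status", "closed")
```

A production system would checkpoint recent states to avoid replaying deep history, which is precisely the cost the abstract measures.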
Comacchio, Anna; Pizzi, Claudio
Enriching understanding of the current theoretical debate on project-based open innovation, ‘Project-based Knowledge in Organizing Open Innovation’ draws on innovation management literature and knowledge-based perspectives to investigate the relationship between knowledge development at the project level and the strategic organization of open innovation. Addressing the still open issue of how the firm level of analysis should be complemented by studies at the project level, this book provides theoretical and empirical arguments on the advantages of a more fine-grained level of analysis for understanding how firms organize their innovation processes across boundaries. The book also addresses the emerging interest in the management literature in project-based organizations and in the relevance of project forms of organizing in a knowledge-based economy. Through field research in different industrial settings, this book provides empirical evidence on how firms design open innovation project-by-project, and it will ...
calculi have similarly been used for the study of bio-chemical reactive systems. In this dissertation it is argued that techniques rooted in the theory and practice of programming languages, language-based techniques if you will, constitute a strong basis for the investigation of models of biological … e.g., the effects of receptor defects or drug delivery mechanisms. The property of sequential realisability, which is closely related to the function of biochemical pathways, is addressed by a variant of traditional Data Flow Analysis (DFA). This so-called ‘Pathway Analysis’ computes safe approximations to the set...
Full Text Available As an EU policy agenda, the “knowledge-based bio-economy” (KBBE) emphasizes bio-technoscience as the means to reconcile environmental and economic sustainability. This frames the sustainability problem as an inefficiency to be overcome through a techno-knowledge fix. Here ecological sustainability means a benign eco-efficient productivity using resources which are renewable, reproducible and therefore sustainable. The KBBE narrative has been elaborated by European Technology Platforms in the agri-food-forestry-biofuels sectors, whose proposals shape research priorities. These inform policy agendas for the neoliberalization of both nature and knowledge, especially through intellectual property. In these ways, the KBBE can be understood as a new political-economic strategy for sustainable capital. This strategy invests great expectations for unlocking the productive potential of natural resources through a techno-knowledge fix. Although eco-efficiency is sometimes equated with biological productivity, commercial success will be dependent upon new combinations of “living” and “dead” labour.
Suzuki, Einoshin [Yokohama National Univ. (Japan)]; Shimura, Masamichi [Tokyo Inst. of Technology (Japan)]
This paper presents an algorithm for discovering exceptional knowledge from databases. Exceptional knowledge, which is defined as an exception to a general fact, exhibits unexpectedness and is sometimes extremely useful in spite of its obscurity. Previous discovery approaches for this type of knowledge employ either background knowledge or domain-specific criteria for evaluating the possible usefulness, i.e. the interestingness of the knowledge extracted from a database. It has been pointed out, however, that these approaches are prone to overlook useful knowledge. In order to circumvent these difficulties, we propose an information-theoretic approach in which we obtain exceptional knowledge associated with general knowledge in the form of a rule pair using a depth-first search method. The product of the ACEs (Average Compressed Entropies) of the rule pair is introduced as the criterion for evaluating the interestingness of exceptional knowledge. The inefficiency of depth-first search is alleviated by a branch-and-bound method, which exploits the upper-bound for the product of the ACEs. MEPRO, which is a knowledge discovery system based on our approach, has been validated using the benchmark databases in the machine learning community.
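A rule pair of the kind MEPRO searches for, a general rule together with a strong exception, can be illustrated with plain confidence counting (the ACE-based interestingness criterion itself is not reproduced here; data and predicates are invented):

```python
def confidence(records, premise, conclusion):
    """Fraction of records satisfying `premise` that also satisfy `conclusion`."""
    matched = [r for r in records if premise(r)]
    if not matched:
        return 0.0
    return sum(1 for r in matched if conclusion(r)) / len(matched)

# Toy data: a general rule ("birds fly") with a rare but strong exception.
records = [{"bird": True, "penguin": False, "flies": True}] * 9 + \
          [{"bird": True, "penguin": True, "flies": False}]

general = confidence(records, lambda r: r["bird"], lambda r: r["flies"])
exception = confidence(records,
                       lambda r: r["bird"] and r["penguin"],
                       lambda r: not r["flies"])
```

The exception rule covers few records but contradicts the general rule's conclusion with full confidence, which is exactly the unexpectedness the abstract targets.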
Structural choice is a significant decision having an important influence on structural function, social economics, structural reliability and construction cost. A Case-Based Reasoning (CBR) system, with its retrieval part constructed with a KDD subsystem, is put forward to make such decisions for a large-scale engineering project. A typical CBR system consists of four parts: case representation, case retrieval, evaluation, and adaptation. A case library is a set of parameterized excellent and successful structures. For a structural choice, the key point is that the system must be able to detect the pattern classes hidden in the case library and classify the input parameters into classes properly. This is done by using a KDD data mining algorithm based on Self-Organizing Feature Maps (SOFM), which makes the whole system more adaptive, self-organizing, self-learning and open.
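The competitive-learning core of a SOFM used for such retrieval can be sketched minimally (the neighbourhood function and the surrounding CBR machinery are omitted; data and parameters are invented):

```python
import math
import random

def train_som(data, n_nodes=4, epochs=50, lr=0.5, seed=0):
    """Competitive learning: move the best-matching node toward each case."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: math.dist(x, nodes[i]))
            for j in range(dim):
                nodes[bmu][j] += lr * (x[j] - nodes[bmu][j])
        lr *= 0.9  # cool the learning rate each epoch
    return nodes

def retrieve_class(x, nodes):
    """Classify an input by its best-matching node (pattern class)."""
    return min(range(len(nodes)), key=lambda i: math.dist(x, nodes[i]))

# Two clusters of past cases; the map separates them into pattern classes.
cases = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05),
         (0.9, 1.0), (1.0, 0.9), (0.95, 0.95)]
som = train_som(cases)
```

After training, new input parameters are routed to the pattern class of their best-matching node, which is the classification step the abstract relies on.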
Fransson, Anders; Håkanson, Lars; W. Liesch, Peter
In this note we revisit two core propositions of the knowledge-based view of the firm found in the seminal work of Kogut and Zander: (1) that multinational corporations (MNCs) exist because transfers and re-combinations of knowledge occur more efficiently inside MNCs than between MNCs and third p… of superior knowledge governance. We question these conclusions, arguing that firms are but one of the many types of “epistemic communities” possessing and nurturing procedural norms, identity, and the cognitive, linguistic and reflexive attributes conducive to efficient exchange and re-combination of knowledge among their members. Important insights may be gained by applying the concept of epistemic communities implicit in the knowledge-based perspective beyond firm-level hierarchies.
Full Text Available Diasporas stand out as an economic or cultural avant-garde of transformation. This is especially true for academic and other intellectual Diaspora communities, because science and knowledge creation are global enterprises. The proclivity of knowledge workers to move in order to improve and absorb transnational knowledge through Diaspora networks might be an essential quality of an emerging national economy of a developing country. The article treats the role of the expert Diaspora in the knowledge-based economy, innovation and talent management. Besides presenting the essentials of the knowledge-based economy and innovation, it discusses the role of the expert Diaspora in science, technology and innovation (STI) capacity building. Also, the article emphasizes the importance of leadership for talent and its implications for the Diaspora. Using WEF statistics, it illustrates the negative consequences of the sad policy of “chase away the brightest and the best” for the innovative capacity, competitiveness, and prosperity of nations.
Mendonça, E A; Cimino, J J
As part of an effort to develop a knowledge base to support searching the online medical literature according to individual needs, we have studied the possibility of using the co-occurrence of MeSH terms in MEDLINE citations, associated with the search strategies optimal for evidence-based medicine, to automate the construction of a knowledge base. This study evaluates the relevance of the relationships between the semantic relationship pairs generated by the process, and the clinical validity of the semantic types involved in the process. Of the semantic pairs proposed by our method, a group of clinicians judged sixty percent to be relevant. The remaining forty percent included semantic types considered unimportant by clinicians. The knowledge extraction method showed reasonable results. We believe it can be appropriate for the task of retrieving information from the medical record in order to guide users during a searching and retrieval process. Future directions include the validation of the knowledge, based on an evaluation of system performance.
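The co-occurrence counting at the heart of such a method can be sketched as follows (citation data invented; the ranking of pairs and the mapping to semantic types are omitted):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(citations):
    """Count unordered co-occurrences of MeSH terms across citations."""
    counts = Counter()
    for terms in citations:
        for a, b in combinations(sorted(set(terms)), 2):
            counts[(a, b)] += 1
    return counts

# Invented example citations, each a set of MeSH terms.
citations = [
    {"Hypertension", "Amlodipine"},
    {"Hypertension", "Amlodipine", "Stroke"},
    {"Stroke", "Aspirin"},
]
pairs = cooccurrence(citations)
```

Frequently co-occurring pairs would then be proposed as candidate semantic relationships for clinicians to vet, as described above.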
Wang Leigang; Deng Dongmei; Liu Zhubai
Guided by forging technology theory, design rules for the rotor forging process are summed up, and a knowledge-based CAPP system for rotor forging is created. The system produces a rational and optimal process.
National Aeronautics and Space Administration — This presentation will review the use of knowledge management in the development and support of Condition Based Maintenance (CBM) systems for complex systems with...
Qi, Guilin; Liu, Weiru; Bell, David A
Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize basic...
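The lexicographic idea can be sketched on a propositional toy example: interpretations are compared by their vectors of violated formulas per weight level, highest weight first. Formulas are modelled here as Python predicates (an invented simplification); the example shows how a low-weight formula that standard possibilistic inference would "drown" beneath a higher-level conflict still influences the selected models.

```python
def violations_by_level(model, weighted_formulas):
    """Tuple of violation counts per weight level, highest weight first."""
    levels = sorted({w for _, w in weighted_formulas}, reverse=True)
    return tuple(
        sum(1 for f, w in weighted_formulas if w == lvl and not f(model))
        for lvl in levels
    )

def lex_best(models, weighted_formulas):
    """Models minimal under lexicographic comparison of violation vectors."""
    keyed = [(violations_by_level(m, weighted_formulas), m) for m in models]
    best = min(k for k, _ in keyed)
    return [m for k, m in keyed if k == best]

# p (0.9) conflicts with not-p (0.8); q (0.3) sits below the conflict.
formulas = [
    (lambda m: m["p"], 0.9),
    (lambda m: not m["p"], 0.8),
    (lambda m: m["q"], 0.3),
]
models = [{"p": p, "q": q} for p in (True, False) for q in (True, False)]
selected = lex_best(models, formulas)
```

Despite the high-level conflict over p, the single selected model still satisfies q, so q is recoverable from the resulting classical base.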
Abstract: This paper describes our system for the Vital Filtering and Streaming Slot Filling tasks of the TREC 2014 Knowledge Base Acceleration (KBA) track. In the Vital … can help update a knowledge base like Wikipedia. KBA 2014 includes three tasks: the Vital Filtering (VF) task, the Streaming Slot Filling task, and Accelerate … Create. The third task is a new open track which is not evaluated. For the Vital Filtering task, given a fixed list of target entities from …
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
Lifelong machine learning (LML) models learn with experience maintaining a knowledge-base, without user intervention. Unlike traditional single-domain models they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes online LML model (OAMC) to support streaming data with reduced data dependency. With engineering the knowledge-base and introducing new knowledge features the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy as topic coherence by 7% for streaming data while reducing the processing cost to half.
Juttner, Melanie; Boone, William; Park, Soonhye; Neuhaus, Birgit J.
Research on teachers' professionalism and professional development has increased in the last two decades. A main focus of this line of research has been the cognitive component of teacher professionalism, i.e., professional knowledge. Most of the previous studies on teacher knowledge--such as the Learning Mathematics for Teaching (LMT) (Hill et…
Desai, D. K.; Pal, S.; Navathe, S. B.; Doty, K. L.
Manufacturing industries have greatly emphasized the need to "integrate" various manufacturing functions (Design, Planning, and Business Operations) into a unified and well coordinated system, so as to increase productivity. In order to achieve this goal, the various CAD/CAM systems must have a common engineering and manufacturing knowledge base. We propose a Knowledge Based Manufacturing System (KBMS) that will help the Manufacturing Engineer (ME) to directly model the workcell environment. The system consists of two main modules: the Workcell Modelling Facility and the Task Planner. The Workcell Modelling Facility helps the ME to create workcell models using the workcell components, product parts, and manufacturing operations contained in a pre-defined knowledge base. The system also allows the manufacturing engineer to add information to the existing knowledge base schemas. The Task Planner accesses these knowledge bases to generate a network of proposed actions from a given production goal. Integration of the proposed KBMS with a Geometric Modelling System will provide the ME with a tool to perform off-line animation of the manufacturing process in a particular workcell model. A prototype KBMS is currently being implemented at the University of Florida using the Object-oriented Semantic Association Model (OSAM*) as the underlying data model for the knowledge bases.
Yu-Rong Cheng; Ye Yuan; Jia-Yu Li; Lei Chen; Guo-Ren Wang
With more and more knowledge provided by the WWW, querying and mining knowledge bases have attracted much research attention. Among all the queries over knowledge bases, which are usually modelled as graphs, the keyword query is the most widely used one. Although the problem of keyword query over graphs has been deeply studied for years, knowledge bases, as special error-tolerant graphs, cause the results of traditionally defined keyword queries to fall short of users' expectations. Thus, in this paper, we define a new keyword query, called the confident r-clique, specific to knowledge bases and based on the r-clique definition for keyword queries on general graphs, which has been proved to be the best one. However, as we prove in the paper, finding the confident r-cliques is #P-hard. We propose a filtering-and-verification framework to improve the search efficiency. In the filtering phase, we develop the tightest upper bound of the confident r-clique, and design an index together with its search algorithm, which suits the large scale of knowledge bases well. In the verification phase, we develop an efficient sampling method to verify the final answers from the candidates remaining after the filtering phase. Extensive experiments demonstrate that the results derived from our new definition satisfy users' requirements better compared with the traditional r-clique definition, and that our algorithms are efficient.
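Ignoring the confidence scores and the index-based filtering, the underlying r-clique keyword query can be sketched by brute force (graph and keywords invented): an answer picks one node per keyword such that all chosen nodes are pairwise within distance r.

```python
from collections import deque
from itertools import product

def bfs_dist(graph, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def r_cliques(graph, keyword_nodes, r):
    """All combinations (one node per keyword) pairwise within distance r."""
    dists = {u: bfs_dist(graph, u) for u in graph}
    answers = []
    for combo in product(*keyword_nodes):
        if all(dists[a].get(b, float("inf")) <= r
               for a in combo for b in combo):
            answers.append(combo)
    return answers

# Toy graph: a - b - c; keyword 1 matches {a}, keyword 2 matches {c}.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

The enumeration over `product(*keyword_nodes)` is exponential in the number of keywords, which is why the paper needs upper bounds, indexing, and sampling rather than brute force.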
Full Text Available This paper presents a model based on fuzzy set theory for determining the score of knowledge management in an organization. The introduced model has five stages. In the first stage, the input and output variables of the model are characterized using available theories. The inputs are as follows: knowledge acquisition, knowledge storage, knowledge creation, knowledge sharing and knowledge transfer. The output is the score of knowledge management in the organization. In the second stage, the input and output are converted into fuzzy numbers after classification. Inference rules are explained in the third stage. In the fourth stage, defuzzification is performed, and in the fifth stage, the devised system is tested. The test result shows that the presented model has high validity. Ultimately, by using the designed model, the score of knowledge management for the Tabriz Kar machinery industry was calculated. The statistical population consists of 50 members of this organization. The whole population was studied. A questionnaire was devised, and its validity and reliability were confirmed. The results indicated that the score of knowledge management in the Tabriz Kar machinery industry was at an average level with a membership grade of 0.924 and at a high level with a membership grade of 0.076.
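The fuzzification step of such a model can be sketched with triangular membership functions (the breakpoints and class names are invented; the rule base and defuzzification stages described above are omitted):

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over open support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(score):
    """Membership of a 0-10 knowledge-management score in three classes."""
    return {
        "low": triangular(score, -0.1, 0, 5),
        "average": triangular(score, 0, 5, 10),
        "high": triangular(score, 5, 10, 10.1),
    }
```

A crisp score thus gets graded memberships in several classes at once, which is how a result like "average with grade 0.924, high with grade 0.076" arises.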
Ana Maria de Andrade Caldeira
Full Text Available This paper describes the development and validation steps of a Likert evaluative scale. The aim is to systematize the answers of Biological Sciences students regarding: (1) understanding or not understanding scientific knowledge; and (2) whether there is a relationship among scientific concepts that supports systemic thinking about natural phenomena. The described scale was validated by Cronbach's alpha (α = 0.741), KMO (0.779) and Bartlett (0.000) tests, and a multivariate analysis of the Principal Component Analysis (PCA) type was performed. We conclude that this kind of instrument allows a large amount of data to be collected and its groups to be compared efficiently, which justifies the development of the evaluative scale presented here.
Saleem, Arshad; Lind, Morten
This paper presents a mechanism for developing knowledge-based support in multiagent-based control and diagnosis. In particular, it presents a way for autonomous agents to utilize a qualitative means-ends based model for reasoning about control situations. The proposed mechanism has been used in ...
Full Text Available The widespread enthusiasm for a knowledge-based approach to understanding the nature of a business and the possible basis for sustained competitive advantage has renewed interest in human capital evaluation or measurement. While many attempts have been made to develop methods for measuring intellectual capital, none have been widely adopted in the business world. In knowledge-based organizations, and generally in the information society, human capital is recognized as the fundamental factor of overall progress, and experts agree that long-term investment in human capital has strong drive-propagation effects at the individual, organizational, national and global levels. In this paper, we consider that a knowledge-based approach can offer new possibilities and answers to illustrate the importance of evaluating human capital and knowledge assets by consistently generating added value in the business world.
Guang Lan Zhang
Full Text Available With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.
As background knowledge for geographic information retrieval (GIR), gazetteers have their limitations. In this paper we propose to develop and implement a common sense geographic knowledge base (CSGKB) instead of gazetteers. We define the CSGKB as being concerned with the representation of geographic knowledge in the human brain and the simulation of geographic reasoning in daily life. Traditional geographic information systems (GIS) are based on the model of a map, with data based on geographic coordinates and computation based on geometry. However, the CSGKB, which is made up of geographic features and relationships and is based on qualitative spatio-temporal reasoning, can be viewed as a direct model of the geographic world. This paper also discusses the characteristics of the CSGKB and presents its structure, which is composed of a knowledge base, inference engine, geographic ontology and learner. Applications using the CSGKB include geographic information retrieval (GIR), natural language processing (NLP), named entity recognition (NER), the Semantic Web, etc. At present, our work focuses on the design of the geographic ontology and the implementation of the CSGKB knowledge base. In this paper we describe the CSGKB ontology structure, top ontology, geographic location ontology, spatial relationship ontology, and domain ontologies. Finally, we introduce the current state of implementation of the CSGKB and give an outlook on our future research.
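A common-sense geographic KB of the kind described can be sketched as facts plus one qualitative inference rule, here transitivity of containment (the place names are illustrative examples, not content from the paper):

```python
def transitive_closure(contains):
    """Infer all (place, region) pairs from the 'located-in' relation."""
    closure = set(contains)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

facts = {("Eiffel Tower", "Paris"), ("Paris", "France"), ("France", "Europe")}
inferred = transitive_closure(facts)
```

This kind of qualitative rule, rather than coordinate geometry, is what distinguishes the CSGKB's reasoning from a traditional map-based GIS.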
Full Text Available One of the goals of the graduate Program in Molecular Biology at UNIFESP (PrMB-UNIFESP) is to contribute to the continuing education of biology teachers from public high schools. A close relationship between the university and public schools is an important channel for the dissemination of scientific knowledge. Thus, a 40 h Molecular Biology updating course was offered to 20 high school teachers. The objective was to discuss genomic and proteomic advances and their applications. The course was organized by graduate students from PrMB-UNIFESP. Three groups of students were formed, two being responsible for theoretical and practical classes and one for overall logistics, including the search for financial support. The themes presented to the teachers were the flow of genetic information, recombinant DNA, gene cloning, transgenic plants and animals, mutation, super bacteria and stem cells. The teachers also had hands-on classes including DNA extraction, PCR, gene cloning and SDS-PAGE. The teachers received an assignment to go back to their schools and carry out some activity with their students related to the themes discussed. The students produced videos, discussions, posters, theater, experimental models and practical classes related to the course themes. After 3 months, the teachers returned to show their students' work. We conclude that information was transmitted to the teachers, updating them, and to the high school students, who learned science in an entertaining way. Also, the graduate students gained experience in how to organize a course, including all its responsibilities.
In the 1970s and 1980s, forms of user-based and cognitive approaches to knowledge organization came to the forefront as part of the overall development in library and information science and in the broader society. The specific nature of user-based approaches is their basis in empirical studies … 's PageRank are not based on empirical studies of users. In knowledge organization, the Book House System is one example of a system based on user studies. In cognitive science the important WordNet database is claimed to be based on psychological research. This article considers such examples. … The role of the user is often confused with the role of subjectivity. Knowledge organization systems cannot be objective and must therefore, by implication, be based on some kind of subjectivity. This subjectivity should, however, be derived from collective views in discourse communities rather than …
Waters, Theodore E A; Fraley, R Chris; Groh, Ashley M; Steele, Ryan D; Vaughn, Brian E; Bost, Kelly K; Veríssimo, Manuela; Coppola, Gabrielle; Roisman, Glenn I
There is increasing evidence that attachment representations abstracted from childhood experiences with primary caregivers are organized as a cognitive script describing secure base use and support (i.e., the secure base script). To date, however, the latent structure of secure base script knowledge has gone unexamined-this despite that such basic information about the factor structure and distributional properties of these individual differences has important conceptual implications for our understanding of how representations of early experience are organized and generalized, as well as methodological significance in relation to maximizing statistical power and precision. In this study, we report factor and taxometric analyses that examined the latent structure of secure base script knowledge in 2 large samples. Results suggested that variation in secure base script knowledge-as measured by both the adolescent (N = 674) and adult (N = 714) versions of the Attachment Script Assessment-is generalized across relationships and continuously distributed.
Knight, K; Knight, Kevin; Luk, Steve K.
Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual match...
Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y. [Centre of Advanced Research on Energy, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal, Melaka (Malaysia)]
The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to local authorities as well as waste management companies. The company finds it difficult to obtain knowledge from its expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which is able to provide the necessary support to the company when the expert is not available, and which makes the implementation of the process and tools more standardized and accurate. The knowledge entered into the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.
Stantchev, Vladimir; Franke, Marc Roman
Knowledge-based enterprises are typically conducting a large number of research and development projects simultaneously. This is a particularly challenging task in complex and diverse project landscapes. Project Portfolio Management (PPM) can be a viable framework for knowledge and innovation management in such landscapes. A standardized process with defined functions such as project data repository, project assessment, selection, reporting, and portfolio reevaluation can serve as a starting point. In this work we discuss the benefits a multidimensional evaluation framework can provide for knowledge-based enterprises. Furthermore, we describe a knowledge and learning strategy and process in the context of PPM and evaluate their practical applicability at different stages of the PPM process.
Workman, T Elizabeth; Fiszman, Marcelo; Cairelli, Michael J; Nahl, Diane; Rindflesch, Thomas C
Findings from information-seeking behavior research can inform application development. In this report we provide a system description of Spark, an application based on findings from Serendipitous Knowledge Discovery studies and data structures known as semantic predications. Background information and the previously published IF-SKD model (outlining Serendipitous Knowledge Discovery in online environments) illustrate the potential use of information-seeking behavior in application design. A detailed overview of the Spark system illustrates how methodologies in design and retrieval functionality enable production of semantic predication graphs tailored to evoke Serendipitous Knowledge Discovery in users.
Domain-specific knowledge representation is achieved through the use of ontologies. An ontology model of software risk management is an effective approach to communication between people in the teaching and learning community, to communication and interoperation among various knowledge-oriented applications, and to the sharing and reuse of software. However, the lack of formal representation tools for domain modeling results in taking liberties with conceptualization. This paper describes an ontology-based semantic knowledge representation mechanism; the architecture we propose has been successfully implemented for the domain of software risk management.
The field of knowledge-based systems (KBS) has expanded enormously in recent years, and many important techniques and tools are now available. Applications of KBS range from medicine to engineering and aerospace. This book provides a selected set of state-of-the-art contributions that present advanced techniques, tools and applications, prepared by a group of eminent researchers and professionals in the field. The theoretical topics covered include knowledge acquisition, machine learning, genetic algorithms, knowledge management and processing under uncertainty.
A good corporate culture based on humanistic theory can make an enterprise's management very effective, giving all of the enterprise's members strong cohesion and centripetal force. With an experiential learning model, the enterprise can establish a corporate culture with an enthusiastic learning spirit, gain the innovation ability needed for a positive knowledge growth effect, and meet fierce global marketing competition. A case study of Trend's corporate culture offers proof of an industry knowledge growth rate equation as the contribution of experiential learning to corporate culture management.
Riedesel, Joel D.
Artificial intelligence application requirements demand powerful representation capabilities as well as efficiency for real-time domains. Many tools exist, the most prevalent being expert system tools such as ART, KEE, OPS5, and CLIPS. Other tools just emerging from the research environment are truth maintenance systems for representing non-monotonic knowledge, constraint systems, object-oriented programming, and qualitative reasoning. Unfortunately, as many knowledge engineers have experienced, simply applying a tool to an application requires a large amount of effort to bend the application to fit, and much supporting work goes into making the tool integrate effectively. A Knowledge Management Design System (KNOMAD) is described, which is a collection of tools built in layers. The layered architecture provides two major benefits: the ability to flexibly apply only those tools that are necessary for an application, and the ability to keep overhead, and thus inefficiency, to a minimum. KNOMAD is designed to manage many knowledge bases in a distributed environment, providing maximum flexibility and expressivity to the knowledge engineer while also providing support for efficiency.
To enable the efficient reuse of standards-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows different archetypes of one repository to be linked into a knowledge base. Presently it supports several relationships and will be extended in the future.
Ramstein, G; Bernadet, M
Chromosome, a knowledge-based analysis system, has been designed for the classification of human chromosomes. Its aim is to perform an optimal classification by driving a toolbox containing procedures for image processing, pattern recognition and classification. This paper presents the general architecture of Chromosome, based on a multi-agent system generator. The image processing toolbox is described, from metaphase image enhancement to the fine classification. Emphasis is then put on the knowledge base intended for chromosome recognition. The global classification process is also presented, showing how Chromosome proceeds to classify a given chromosome. Finally, we discuss further extensions of the system for karyotype building.
There are two basic approaches to the representation of design knowledge in knowledge-based CAAD systems: the type-based approach, which has a long tradition, and the more recent typeless approach. Proponents of the latter have offered a number of arguments against the type-based approach, which are reviewed in the paper and found rather unconvincing. Work within both paradigms is also reviewed and problems encountered under both approaches discussed. The analysis concludes with a recommendation of preserving the notion of types but in a form that avoids rigidity, and with hints at how certain...
Knowledge management is a challenging task, especially in administrative processes with a typical workflow, such as those of higher educational institutions and universities. We have proposed a system, aSPOCMS (an Agent-based Semantic Web for Paperless Office Content Management System), that aims at providing a paperless environment for the typical workflows of universities; this requires ontology-based knowledge management to manage the files and documents of the various departments and sections of a university. In the Semantic Web, an ontology describes the concepts, the relationships among the concepts, and the properties within a domain. It provides automatic inference and interoperability between applications, which is an appropriate vision for knowledge management. In this paper we discuss how Semantic Web technology can be utilized in higher educational institutions for knowledge representation of various resources and for handling the tasks of administrative processes. This requires the exploitation of knowledge of various resources of the university, such as departments, schools, sections, files and employees, by aSPOCMS, which is built as an agent-based system using the ontology for communication between agents and users and for knowledge representation and management.
This paper presents a framework called the logical knowledge object (LKO), which is taken as a basis for the dependable development of knowledge-based systems (KBSs). LKO combines the logic programming and object-oriented programming paradigms, where objects are viewed as abstractions with states, constraints, behaviors and inheritance. The operational semantics, defined in the style of natural semantics, is simple and clear. A hybrid knowledge representation amalgamating rules, frames, semantic networks and blackboards is available for both structured and flat knowledge. The management of knowledge bases has been formally specified. Accordingly, LKO is well suited to the formal representation of the knowledge and requirements of KBSs. Based on the framework, verification techniques are also explored to enhance the analysis of requirement specifications and the validation of KBSs. In addition, LKO provides a methodology for the development of KBSs, applying the concepts of rapid prototyping and top-down design to deal with changing and incomplete requirements, and to provide multiple abstract models of the domain, where formal methods might be used at each abstraction level.
Großschedl, Jörg; Mahler, Daniela; Kleickmann, Thilo; Harms, Ute
Teachers' content-related knowledge is a key factor influencing the learning progress of students. Different models of content-related knowledge have been proposed by educational researchers; most of them take into account three categories: content knowledge, pedagogical content knowledge, and curricular knowledge. As there is no consensus about…
Shah, Mamta; Foster, Aroutis
Research focusing on the development and assessment of teacher knowledge in game-based learning is in its infancy. A mixed-methods study was undertaken to educate pre-service teachers in game-based learning using the Game Network Analysis (GaNA) framework. Fourteen pre-service teachers completed a methods course, which prepared them in game…
WANG Lan-cheng; JIANG Dan; LE Jia-jin
This paper builds on two existing theories of automatic indexing of thematic knowledge concepts. A prohibit-word (stopword) table with position information has been designed, and an improved Maximum Matching-Minimum Backtracking segmentation method has been investigated. Moreover, an improved indexing algorithm and its application, based on rules and a thematic concept word table, have been studied.
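The paper's improved Maximum Matching-Minimum Backtracking variant is not specified in the abstract, but the baseline it builds on, plain forward maximum matching against a dictionary, can be sketched as follows (the toy vocabulary and the `max_len` window are illustrative assumptions, not the authors' data):

```python
def max_match(text, dictionary, max_len=4):
    """Greedy forward maximum-matching segmentation: at each position take
    the longest dictionary word starting there; fall back to one character."""
    words, i = [], 0
    while i < len(text):
        # Try candidate lengths from longest to shortest.
        for j in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + j] in dictionary or j == 1:
                words.append(text[i:i + j])
                i += j
                break
    return words

vocab = {"ab", "cd", "abc"}
print(max_match("abcd", vocab, max_len=3))  # ['abc', 'd']
```

Minimum backtracking would additionally revisit a greedy choice (here "abc", stranding "d") when it leaves an unsegmentable remainder; that refinement is omitted since the abstract gives no details.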
Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte
Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…
... for analyzing, understanding, reforming and perfecting the objective world. This paper presents a Systems-Science-Based Knowledge Model (SSBKM) to establish a more general knowledge structure model. It can be regarded as a development of frame representation for discovering and constructing slot structures as well as frame structures. With this model the paper also presents a Systems-Science-Based Object-Oriented Analysis method (SSBOOA), which is a strategy to find and determine object classes and class structures, and the relations between object instances of different classes, not just to explain classes. Finally, the paper illustrates the knowledge analysis and computerization (synthesis) steps with an example of an SSBKM for a cognitive-psychology-based CAI network for teaching middle school mathematics.
HAN Yan-ling; YANG Bing-ru; CAO Shou-qi
During fault diagnosis of large-scale, complicated equipment, the existence of redundant and fuzzy information makes knowledge acquisition difficult. Aiming at this characteristic, this paper brings Rough Set (RS) theory to the field of fault diagnosis. By means of RS theory, which is predominant in dealing with fuzzy and uncertain information, knowledge acquisition for fault diagnosis was realized. The founding ideology of RS theory is expounded in detail, an amended RS algorithm is proposed, and a process model of knowledge acquisition based on the amended algorithm is investigated. Finally, we verified the correctness and practicability of this method in the knowledge acquisition procedure.
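As a rough illustration of how RS theory extracts consistent knowledge from a redundant decision table, the sketch below computes the positive region, i.e. the union of lower approximations of the decision classes. The toy fault table, attribute names and object labels are invented for illustration; the paper's amended algorithm is not detailed in the abstract:

```python
from collections import defaultdict

def partition(objects, attrs):
    """Group objects (dicts of attribute values) into
    indiscernibility classes over the given attributes."""
    blocks = defaultdict(list)
    for name, obj in objects.items():
        blocks[tuple(obj[a] for a in attrs)].append(name)
    return list(blocks.values())

def positive_region(objects, cond_attrs, decision):
    """Objects whose condition-attribute class is consistent on the
    decision attribute: the union of the lower approximations."""
    pos = set()
    for block in partition(objects, cond_attrs):
        decisions = {objects[name][decision] for name in block}
        if len(decisions) == 1:   # block lies wholly in one decision class
            pos.update(block)
    return pos

# Toy diagnosis table: symptoms s1, s2 -> fault class
table = {
    "o1": {"s1": 1, "s2": 0, "fault": "A"},
    "o2": {"s1": 1, "s2": 0, "fault": "B"},  # conflicts with o1
    "o3": {"s1": 0, "s2": 1, "fault": "A"},
}
print(sorted(positive_region(table, ["s1", "s2"], "fault")))  # ['o3']
```

Attribute reduction then drops condition attributes whose removal leaves the positive region unchanged, which is one way redundant diagnostic information gets eliminated.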
Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.
In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) and a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual-process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common-sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.
Subramanian, Viswanath; Biswas, Gautam; Bezdek, James C.
This paper presents the design and development of a prototype document retrieval system using a knowledge-based systems approach. Both the domain-specific knowledge base and the inferencing schemes are based on a fuzzy set theoretic framework. A query in natural language represents a request to retrieve a relevant subset of documents from a document base. Such a query, which can include both fuzzy terms and fuzzy relational operators, is converted into an unambiguous intermediate form by a natural language interface. Concepts that describe domain topics and the relationships between concepts, such as the synonym relation and the implication relation between a general concept and more specific concepts, have been captured in a knowledge base. The knowledge base enables the system to emulate the reasoning process followed by an expert, such as a librarian, in understanding and reformulating user queries. The retrieval mechanism processes the query in two steps. First it produces a pruned list of documents pertinent to the query. Second, it uses an evidence combination scheme to compute a degree of support between the query and individual documents produced in step one. The front-end component of the system then presents a set of document citations to the user in ranked order as an answer to the information request.
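The two-step retrieval described above can be illustrated with a minimal sketch: query terms are first expanded through the knowledge base's synonym and implication relations, then each document receives a degree of support from its fuzzy index weights. The toy relations, weights, and the simple max-combination are assumptions for illustration; the paper's actual evidence-combination scheme is not given in the abstract:

```python
def expand(terms, synonyms, implications):
    """One-step query reformulation: add synonyms and the more
    specific concepts implied by each general query term."""
    expanded = set(terms)
    for t in terms:
        expanded |= synonyms.get(t, set())
        expanded |= implications.get(t, set())
    return expanded

def support(query_terms, doc_weights):
    """Degree of support between a query and one document: the best
    fuzzy index weight among matching terms (one simple choice of
    evidence combination among many)."""
    return max((doc_weights.get(t, 0.0) for t in query_terms), default=0.0)

synonyms = {"car": {"automobile"}}
implications = {"vehicle": {"car", "truck"}}   # general -> specific
q = expand({"vehicle"}, synonyms, implications)

docs = {
    "d1": {"truck": 0.9},   # fuzzy index weights per document
    "d2": {"boat": 0.8},
}
ranked = sorted(docs, key=lambda d: support(q, docs[d]), reverse=True)
print(ranked)  # ['d1', 'd2']
```

Step one (pruning) would simply keep documents with nonzero support before the ranking pass.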
The design of PID controllers is a frequently discussed problem. Many design methods have been developed: classic analytical tuning methods, optimization methods, and also less common fuzzy knowledge-based methods, which are designed to achieve good setpoint following, a suitable time response, etc. Here, a new way of designing PID controller parameters is created in which a knowledge system based on the relations of the Ziegler-Nichols design methods is used; more precisely, a combination of both Ziegler-Nichols methods. A proof of the efficiency of the proposed method and a numerical experiment are presented.
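The abstract does not detail how the two Ziegler-Nichols methods are combined; as background, the two classical rule sets themselves can be sketched as follows (standard textbook coefficients, not the authors' combined knowledge system):

```python
def zn_ultimate(Ku, Tu):
    """Ziegler-Nichols ultimate-gain (closed-loop) PID rules.
    Ku: ultimate gain at which the loop oscillates steadily;
    Tu: period of that oscillation. Returns (Kp, Ti, Td)."""
    return 0.6 * Ku, 0.5 * Tu, 0.125 * Tu

def zn_step_response(K, L, T):
    """Ziegler-Nichols open-loop (step-response) PID rules for a
    first-order-plus-dead-time process: static gain K, dead time L,
    time constant T. Returns (Kp, Ti, Td)."""
    return 1.2 * T / (K * L), 2.0 * L, 0.5 * L

# Example: a loop that oscillates steadily at Ku = 2.0 with period Tu = 4.0 s
Kp, Ti, Td = zn_ultimate(2.0, 4.0)
print(Kp, Ti, Td)  # 1.2 2.0 0.5
```

A fuzzy combination would interpolate between such rule outputs according to knowledge about the process, which is presumably what the proposed knowledge system formalizes.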
The literature-based knowledge discovery method was introduced by Dr. Swanson in 1986, when he hypothesized a connection between Raynaud's phenomenon and dietary fish oil; the field of literature-based discovery (LBD) was born from that work. During the subsequent two decades, LBD research has attracted scientists from information science, computer science, biomedical science, and other fields, and it has become a part of knowledge discovery and text mining. This paper summarizes the development of LBD in recent years in two parts, methodological research and applied research. Lastly, some open problems are pointed out as future research directions.
Regassa, Laura B.; Morrison-Shetlar, Alison I.
Inquiry-based learning was used to enhance an undergraduate molecular biology course at Georgia Southern University, a primarily undergraduate institution in rural southeast Georgia. The goal was to use a long-term, in-class project to accelerate higher-order thinking, thereby enabling students to problem solve and apply their knowledge to novel…
The viability and success of modern enterprises are subject to the increasing dynamics of the economic environment, so they need to adjust their policies and strategies rapidly in order to respond to the sophistication of competitors, customers and suppliers, the globalization of business, and international competition. Perhaps the most critical capability for the success of the modern enterprise is its ability to take advantage of all available information, both internal and external. Making sense of all this information, and gaining value and competitive advantage from it, represents a real challenge for the enterprise. The IT solutions designed to address these challenges have been developed in two different approaches: structured data management (Business Intelligence) and unstructured content management (Knowledge Management). Integrating Business Intelligence and Knowledge Management in new software applications designed not only to store highly structured data and exploit it in real time, but also to interpret the results and communicate them to decision makers, provides real technological support for strategic management. Integrating Business Intelligence and Knowledge Management in order to respond to the challenges the modern enterprise has to deal with represents not only a "new trend" in IT, but a necessity in the emerging knowledge-based economy. These hybrid technologies are already widely known in both the scientific and practitioner communities as Competitive Intelligence. At the end of the paper, a competitive data warehouse design is proposed, in an attempt to apply business intelligence technologies to economic environment analysis making use of Romanian public data sources.
Background: Currently, data about age-phenotype associations are not systematically organized and cannot be studied methodically. Searching for scientific articles describing phenotypic changes reported as occurring at a given age is not possible for most ages. Results: Here we present the Age-Phenome Knowledge-base (APK), in which knowledge about age-related phenotypic patterns and events can be modeled and stored for retrieval. The APK contains evidence connecting specific ages or age groups with phenotypes, such as disease and clinical traits. Using a simple text mining tool developed for this purpose, we extracted instances of age-phenotype associations from journal abstracts related to non-insulin-dependent diabetes mellitus. In addition, links between age and phenotype were extracted from clinical data obtained from the NHANES III survey. The knowledge stored in the APK is made available to the relevant research community in the form of 'Age-Cards'; each card holds the collection of all the information stored in the APK about a particular age. These Age-Cards are presented in a wiki, allowing community review, amendment and contribution of additional information. In addition to the wiki interaction, complex searches can also be conducted, although these require the user to have some knowledge of database query construction. Conclusions: The combination of a knowledge-model-based repository with community participation in the evolution and refinement of the knowledge base makes the APK a useful and valuable environment for collecting and curating existing knowledge of the connections between age and phenotypes.
Managerial competency identification and development are important tools of human resources management aimed at achieving strategic organizational goals. Due to current dynamic development and change, more and more attention is being paid to the personality of managers and to their competencies, since these are viewed as important sources of competitive advantage. The objective of this article is to identify managerial competencies in the process of filling vacant positions in knowledge-based organizations in the Czech Republic. The objective was determined with reference to the Czech Science Foundation (GACR) research project which focuses on the identification of managerial competencies in knowledge-based organizations in the Czech Republic. Within the frame of the research project, this identification is primarily designed, and subsequently realised, on the basis of a content analysis of media communications such as advertisements, a means through which knowledge-based organizations search for suitable candidates for vacant managerial positions. The first part of the article deals with theoretical approaches to knowledge-based organizations and issues of competencies. The second part evaluates the outcomes of the survey carried out and summarizes the basic steps in the application of competencies. The final part summarizes the benefits and difficulties of applying the competency-based approach as a tool for the efficient management of organizations for the purpose of achieving a competitive advantage.
Thiele, Herbert; Glandorf, Jörg; Hufnagel, Peter
With the large variety of proteomics workflows, as well as the large variety of instruments and data-analysis software available, researchers today face major challenges in validating and comparing their proteomics data. Here we present a new generation of the ProteinScape bioinformatics platform, which now enables researchers to manage proteomics data from generation and data warehousing through to a central data repository, with a strong focus on the improved accuracy, reproducibility and comparability demanded by many researchers in the field. It addresses scientists' current needs in proteomics identification, quantification and validation. But producing large protein lists is not the end point in proteomics, where one ultimately aims to answer specific questions about the biological condition or disease model of the analyzed sample. In this context, a new tool has been developed at the Spanish Centro Nacional de Biotecnologia Proteomics Facility, termed PIKE (Protein Information and Knowledge Extractor), that allows researchers to control, filter and access specific information from genomics and proteomics databases, in order to understand the roles and relationships of the proteins identified in the experiments. Additionally, an EU-funded project, ProDac, has coordinated systematic data collection in public standards-compliant repositories such as PRIDE. This will cover all aspects from generating MS data in the laboratory to assembling the whole annotation information and storing it together with identifications in a standardised format.
Schoof, Heiko; Ernst, Rebecca; Nazarov, Vladimir; Pfeifer, Lukas; Mewes, Hans-Werner; Mayer, Klaus F X
Arabidopsis thaliana is the most widely studied model plant. Functional genomics is intensively underway in many laboratories worldwide. Beyond the basic annotation of the primary sequence data, the annotated genetic elements of Arabidopsis must be linked to diverse biological data and higher order information such as metabolic or regulatory pathways. The MIPS Arabidopsis thaliana database MAtDB aims to provide a comprehensive resource for Arabidopsis as a genome model that serves as a primary reference for research in plants and is suitable for transfer of knowledge to other plants, especially crops. The genome sequence as a common backbone serves as a scaffold for the integration of data, while, in a complementary effort, these data are enhanced through the application of state-of-the-art bioinformatics tools. This information is visualized on a genome-wide and a gene-by-gene basis with access both for web users and applications. This report updates the information given in a previous report and provides an outlook on further developments. The MAtDB web interface can be accessed at http://mips.gsf.de/proj/thal/db.
王向华; 覃征; 何坚; 王志敏
Differences in the structure and semantics of knowledge created and maintained by the various actors on the World Wide Web make its exchange and utilization a problematic task. This is an important issue facing organizations undertaking knowledge management initiatives. An XML-based and ontology-supported knowledge description language (KDL) is presented which has a three-tier structure (core KDL, extended KDL and complex KDL) and takes advantage of the strengths of ontologies, XML, description logics and frame-based systems. The framework and XML-based syntax of KDL are then introduced, and methods for translating KDL into first-order logic (FOL) are presented. Finally, the implementation of KDL on the Web is described, and the reasoning ability of KDL, proved by experiment, is illustrated in detail.
An important issue in Knowledge Discovery in Databases (KDD) is to allow the discovered knowledge to be as close as possible to natural language, both to satisfy user needs with tractability and to give KDD systems robustness. To this end, this paper describes a new concept of linguistic atoms with three digital characteristics: expected value Ex, entropy En, and deviation D. This mathematical description effectively integrates the fuzziness and randomness of linguistic terms in a unified way. Based on this model, a method of knowledge representation in KDD is developed which bridges the gap between quantitative knowledge and qualitative knowledge; mapping between the quantitative and the qualitative becomes much easier and interchangeable. In order to discover generalized knowledge from a database, one may use virtual linguistic terms and cloud transforms for the automatic generation of concept hierarchies over attributes. Predictive data mining with the cloud model is given as an implementation, further illustrating the advantages of this linguistic model in KDD.
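The cloud model referred to here is usually realized with a forward normal cloud generator, which turns a linguistic atom into a set of (value, membership) drops. The sketch below follows that standard construction, with the abstract's deviation D used in the role normally played by hyper-entropy; this mapping, the sample term "about 30", and all parameter values are assumptions, since the paper's exact definitions are not given:

```python
import math
import random

def cloud_drops(Ex, En, D, n=1000, seed=0):
    """Forward normal cloud generator for a linguistic atom (Ex, En, D).
    Each drop perturbs the entropy by D, samples a value around Ex,
    and scores its membership with a Gaussian-shaped function."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_p = abs(rng.gauss(En, D)) or En      # perturbed entropy, kept nonzero
        x = rng.gauss(Ex, En_p)                 # cloud drop value
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_p ** 2))  # membership degree
        drops.append((x, mu))
    return drops

# The linguistic term "about 30": Ex = 30, En = 3, D = 0.3
drops = cloud_drops(30.0, 3.0, 0.3, n=500)
```

A cloud transform would run this in reverse, fitting (Ex, En, D) triples to an attribute's value distribution to auto-generate concept hierarchies as the abstract describes.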
Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf
For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we will present the applications BioDWH, an integration toolkit for building life science data warehouses; CardioVINEdb, an information system for biological data in cardiovascular disease; and V...
Introduction: Utilizing and transferring knowledge can be supported by motivating teachers, educating researchers, making better use of evidence, and creating communication between members of the scientific communities, based on the needs of the community and community education. Therefore, the present study mainly aimed at evaluating the process of knowledge production and the use of evidence in the research centres of Tehran and Iran Universities of Medical Sciences. Moreover, this study investigated its application in improving the health system and the community education of students. Methods: In this descriptive cross-sectional study, the study population consisted of 68 research centres affiliated to Tehran and Iran Universities of Medical Sciences. To gather the study data, a questionnaire by Nejat et al. was utilized, covering the two fields of "knowledge production" and "promoting the use of evidence", and the study data were analyzed using SPSS (version 18). Results: The production and use of knowledge in the Tehran and Iran medical universities with regard to the "knowledge and evidence production" used in decision-making was reported to be in a favourable condition, whereas an unfavourable condition was revealed regarding "promoting the use of evidence", which needs proper intervention. Conclusion: The study findings revealed that identifying the specific audience for the study results at the outset of each research project makes the produced evidence and knowledge applicable and leads to research being conducted in accordance with the needs of the community. As a result, the status of the medical universities in Iran needs to be reviewed. Improving production status and promoting evidence-based knowledge can lead to a significant qualitative development in community education.
Adela Anca Fucec
Full Text Available The transition to the knowledge-based economy in Romania is the main path towards sustainable economic growth and may even be the feasible solution the country needs in order to exit the current economic crisis. Knowledge-based organizations are the main vector, a necessary and irreplaceable condition and factor for the creation of the knowledge economy, so every leader should be at least familiar with the premises needed to increase the number of such organizations in Romania and to sustain a propitious environment for their development. The purpose of this paper is therefore to identify and unfold the premises that need to be fulfilled in order to facilitate Romania's transition to the knowledge economy. To this end, following the macroeconomic and microeconomic situation from both an economic and a managerial perspective, two categories of premises have been identified, which are in fact conditioning elements for Romania's transition to the knowledge economy. At the macroeconomic level, the main premise is the need to substantiate (found, elaborate and implement) a genuine strategy for the transition to the knowledge economy; at the microeconomic level, organizations need to embrace strategic management, relocate their attention towards human resources, receive support from their IT departments and give proper importance to organizational culture and the processes related to change management. By emphasizing the details of these premises, the objectives of illustrating Romania's vulnerabilities and needs regarding the transition to the knowledge economy have been attained.
GUO Yinqiao; ZHAO Chuande; WANG Wenxin; LI Cundong
Based on the relationship between crops and their circumstances, a dynamic knowledge model for maize management with wide applicability was developed using the system method and mathematical modeling techniques. With soft component characteristics incorporated, a component- and digital-knowledge-model-based decision support system for maize management was established on the Visual C++ platform. This system realized six major functions: target yield calculation, design of pre-sowing plans, prediction of regular indices, real-time management control, expert knowledge reference and system administration. Cases were studied on the target yield knowledge model with data sets that include different eco-sites, yield levels of the last three years, and fertilizer and water management levels. The results indicated that this system overcomes the shortcomings of traditional expert systems and planting patterns, such as site-specific conditions and narrow applicability, and can be used more widely under different conditions and environments. This system provides a scientific knowledge system and a broad decision-making tool for maize management.
Sidoran, Karen M.
Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
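The factual/behavioral split described above can be illustrated with a minimal discrete-event sketch (hypothetical names, not RASE code): entity attributes carry factual knowledge, while an event-handling method carries behavioral knowledge.

```python
import heapq

class Entity:
    """A discrete-event simulation entity: factual knowledge as
    attributes, behavioral knowledge as methods."""
    def __init__(self, name, speed):
        self.name = name    # factual knowledge: an attribute of the entity
        self.speed = speed  # factual knowledge

    def handle(self, event, sim):
        # behavioral knowledge: how the entity reacts in a circumstance
        if event == "move":
            sim.schedule(sim.now + 10 / self.speed, self, "arrive")

class Simulation:
    """A tiny event-driven simulation kernel."""
    def __init__(self):
        self.now = 0.0
        self.queue = []  # heap of (time, seq, entity, event)
        self.seq = 0
        self.log = []

    def schedule(self, time, entity, event):
        heapq.heappush(self.queue, (time, self.seq, entity, event))
        self.seq += 1  # tie-breaker so entities are never compared

    def run(self):
        while self.queue:
            self.now, _, entity, event = heapq.heappop(self.queue)
            self.log.append((self.now, entity.name, event))
            entity.handle(event, self)

sim = Simulation()
scout = Entity("scout", speed=2.0)
sim.schedule(0.0, scout, "move")
sim.run()
print(sim.log)  # [(0.0, 'scout', 'move'), (5.0, 'scout', 'arrive')]
```

Viewing entities this way keeps an entity's state and its reaction rules in one place, which is the comprehensibility gain the object-oriented formulation aims for.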
Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Rambely, Azmin Sham
The outbreak of information in a borderless world has prompted lecturers to move forward together with technological innovation and the erudition of knowledge in performing their responsibility to educate the young generations to be able to stand above the crowd on the global scene. Teaching and learning through a web-based learning platform is a…
Full Text Available This paper examines the role of mobile communication, mobile tools and work practices in the context of organizations, especially knowledge-based organizations. Today, organizations are highly complex and diverse. Not surprisingly, various solutions to incorporating mobile tools and mobile communication in organizations have been devised. Challenges to technological development and research on mobile communication are presented.
Dogan-Dunlap, Hamide; Torres, Cristina; Chen, Fan
The paper provides a college mathematics student's concept maps, definitions, and essays to support the thesis that language-based prior knowledge can influence students' cognitive processes of mathematical concepts. A group of intermediate algebra students who displayed terms mainly from the spoken language on the first and the second concept…
Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…
This article investigates the societal conditions that might help the establishment of peer-to-peer modes of production. First, the context within which such a new model is emerging - the neoliberal knowledge-based societies - is described and its shortcomings unveiled; and second, a robust...
Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.
The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…
Nawani, Jigna; Rixius, Julia; Neuhaus, Birgit J.
Empirical analysis of secondary biology classrooms revealed that, on average, 68% of teaching time in Germany revolved around processing tasks. The quality of instruction can thus be assessed by analyzing the quality of the tasks used in classroom discourse. This quasi-experimental study analyzed how teachers used tasks in 38 videotaped biology lessons on the topic 'blood and circulatory system'. Two fundamental characteristics were used to analyze tasks: (1) the required cognitive level of processing (e.g. low-level information processing: repetition, summary, define, classify; high-level information processing: interpret-analyze data, formulate hypotheses, etc.) and (2) the complexity of the task content (e.g. whether tasks require the use of factual, linking or concept-level content). Additionally, students' cognitive knowledge structure about the topic 'blood and circulatory system' was measured using student-drawn concept maps (N = 970 students). Finally, linear multilevel models were created with high-level cognitive processing tasks and higher content complexity tasks as class-level predictors and students' prior knowledge, students' interest in biology, and students' interest in biology activities as control covariates. Results showed a positive influence of high-level cognitive processing tasks (β = 0.07) on students' cognitive knowledge structure, but no observed effect of higher content complexity tasks. The presented findings encourage the use of high-level cognitive processing tasks in biology instruction.
This study examined how learners construct textbase and situation model knowledge in hypertext computer-based learning environments (CBLEs) and documented the influence of specific self-regulated learning (SRL) tactics, prior knowledge, and characteristics of the learner on posttest knowledge scores from exposure to a hypertext. A sample of 160…
Full Text Available The use of Knowledge Management is fundamental in the creation of value within companies, being at present a new way to obtain competitive advantages in specific markets. For the process of value creation it is also necessary to use specific Information Technologies that allow the objectives drawn up when implementing a Knowledge Management project to be reached. In this sense, one of the most complete and efficient Information Technologies is the Knowledge Based System, which is itself part of Knowledge Engineering. This article analyzes the existing relation between Knowledge Management, a specific model of knowledge creation, and the Knowledge Based System, and how these Information Technologies play a very important role in the creation, codification and transfer of knowledge.
Pemsel, Sofia; Wiewiorac, Anna; Müller, Ralf
This paper conceptualizes and defines knowledge governance (KG) in project-based organizations (PBOs). Two key contributions towards a multi-faceted view of KG and an understanding of KG in PBOs are advanced, as distinguished from knowledge management and organizational learning concepts. The conceptual framework addresses macro- and micro-level elements of KG and their interaction. Our definition of KG in PBOs highlights the contingent nature of KG processes in relation to their organizational context. These contributions provide a novel platform for understanding KG in PBOs.
Full Text Available This paper reports on the outcomes of a recent study carried out at Universidad de La Salle, which intended to describe and reflect upon what five (5) pre-service teachers in their last semesters pointed out as the most important elements of the knowledge base that teachers should have in order to become English language teachers. The instruments utilized in this qualitative research to gather information from the participants were students' journals, a phenomenological interview and a survey. Results indicate that elements such as command of the language (English), students' preferences and realities, and the control of a class, among others, come to be essential areas teachers should be knowledgeable in.
In order to reveal the nature and patterns of knowledge creation, this study combines Neo-Darwinism with Lamarckism using a dichotomy. Based on biological evolution theories, it interprets knowledge genes; the selection, recombination and variation involved in knowledge evolution; and the relationship between knowledge evolution and the environment, building on the recognition that knowledge has gene-like characteristics. The results show that pure knowledge creation is mainly a Neo-Darwinian evolution, reflecting the uncertainty, randomness and uncontrollability of knowledge creation. Correspondingly, the replication and flow of knowledge exhibit more Lamarckian phenomena, with a clear direction and predictable results. The selection, recombination and variation of knowledge genes are the fundamental reason why knowledge can be created. The direction of knowledge creation is affected by the environment; the driving force of knowledge creation originates in the knowledge agents' learning from the environment, and new knowledge can in turn change the environment.
Forbes, Christina M; O'Leary, Niall D; Dobson, Alan D; Marchesi, Julian R
The role that microorganisms play in the biological removal of phosphate from wastewater streams has received sustained interest since its initial observation over 30 years ago. Recent advances in 'omic'-based approaches have greatly advanced our knowledge in this field and facilitated a refinement of existing enhanced biological phosphate removal (EBPR) models, which were primarily based on culture-dependent approaches that had predominantly been used to investigate the process. This minireview will focus on the recent advances made in our overall understanding of the EBPR process resulting from the use of 'omic'-based methodologies.
Background: Many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within these ontologies relies on the use of automated reasoning. Results: We have developed the Aber-OWL infrastructure that provides reasoning services for bio-ontologies. Aber-OWL consists of an ontology repository, a set of web services and web interfaces that enable ontology-based semantic access to biological data and literature. Aber-OWL is freely available at http://aber-owl.net. Conclusions: Aber-OWL provides a framework for automatically accessing information that is annotated with ontologies or contains terms used to label classes in ontologies. When using Aber-OWL, access to ontologies and data annotated with them is not merely based on class names or identifiers but rather on the knowledge the ontologies contain and the inferences that can be drawn from it.
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
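The prepare-execute-analyze loop in steps (1)-(3) can be sketched as follows; the toy simulator, goal test and revision rule are stand-ins (ASPEN itself is not invoked here), so all names and numbers are illustrative assumptions:

```python
def run_simulation(params):
    # stand-in for executing the target simulator (step 2);
    # a toy model where the output grows with one parameter
    return {"purity": 0.80 + 0.05 * params["reflux"]}

def meets_goals(result, goals):
    # step 3: check whether all specified goals are satisfied
    return all(result[k] >= v for k, v in goals.items())

def revise(params):
    # stand-in for a knowledge-based revision rule that nudges
    # the input data file toward the goal
    params = dict(params)
    params["reflux"] += 1
    return params

def iterate(params, goals, max_runs=10):
    """Repeat the simulate-analyze-modify cycle until the goals hold."""
    for run in range(1, max_runs + 1):
        result = run_simulation(params)
        if meets_goals(result, goals):
            return run, params, result
        params = revise(params)
    raise RuntimeError("goals not met within max_runs")

runs, params, result = iterate({"reflux": 0}, {"purity": 0.9})
print(runs, params["reflux"])  # 3 2
```

An intelligent assistant like IPSE automates exactly this outer loop, encoding the revision step as rules rather than leaving it to manual trial and error.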
Full Text Available Knowledge-based organizations (KBOs) are usually considered to be those whose product or service is knowledge-intensive. The characteristics of a KBO, however, go beyond product to include process, purpose and perspective. Process refers to an organization's knowledge-based activities and processes. Purpose refers to its mission and strategy. Perspective refers to the worldview and culture that influence and constrain an organization's decisions and actions. In order for organizations to remain globally competitive, new tools for decision-making are required. Among these tools, it is internationally recognized that Competitive Intelligence (CI) is fast becoming the norm rather than the exception in assisting management with decision-making in the modern knowledge-based organization. The purpose of competitive intelligence in the organization is to support and lead to management decisions and action. There exists a clear and concise link between competitiveness and the process of innovation. Innovation depends on a number of factors for success; of these, information and intelligence are believed to be primary drivers. The main purpose of this paper is to explain the need for competitive intelligence in KBOs and to present a model for the implementation of competitive intelligence in the KBO and the involvement of the information professional in the competitive intelligence process.
Zhou, Wen; Jia, Yifan
Link prediction is the task of mining missing links in networks or predicting the next vertex pair to be connected by a link. Many link prediction methods have been inspired by the evolutionary processes of networks. In this paper, a new mechanism for the formation of complex networks called knowledge dissemination (KD) is proposed, with the assumption that knowledge disseminates through the paths of a network. Accordingly, a new link prediction method, knowledge dissemination based link prediction (KDLP), is proposed to test KD. KDLP characterizes vertex similarity based on knowledge quantity (KQ), which measures the importance of a vertex through its H-index. Extensive numerical simulations on six real-world networks demonstrate that KDLP is a strong link prediction method which achieves higher prediction accuracy than four well-known similarity measures, including common neighbors, local path index, average commute time and matrix forest index. Furthermore, based on the common conclusion that an excellent link prediction method reveals a good evolving mechanism, the experimental results suggest that KD is a plausible network evolving mechanism for the formation of complex networks.
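The H-index construction used for knowledge quantity can be sketched in a few lines; computing a vertex's H-index over its neighbours' degrees is a common convention and is an illustrative assumption here, not necessarily the paper's exact definition of KQ:

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    values = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(values, start=1):
        if v >= i:
            h = i
    return h

def vertex_h_index(adj, v):
    # H-index of a vertex, computed over its neighbours' degrees
    return h_index([len(adj[u]) for u in adj[v]])

# toy undirected network as an adjacency dict
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
print(vertex_h_index(adj, "a"))  # 2: at least 2 neighbours have degree >= 2
```

Compared with raw degree, the H-index discounts vertices whose many neighbours are themselves poorly connected, which is why it serves as a plausible importance measure.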
Conceição da Costa, Maria
Full Text Available This article aims to analyse the participation of women scientists in knowledge production within the Genome Project sponsored by FAPESP (The State of São Paulo Research Foundation). Between 1997 and 2003, FAPESP invested approximately 33 million euros to develop the FAPESP Genome Project (PGF), generating major changes in Molecular Biology in Brazil: institutions devoted to fostering science and technology invested large sums of money; bioinformatics became one of the fields with the greatest demand for professionals; the results of the Xylella Genome Project, the first organism sequenced in Brazil, were published in several international scientific journals including Nature; and Brazil became the first country outside the USA, Europe and Japan to develop genome projects. As a consequence of this process, women scientists were losing space as "spokespersons of this new science", playing secondary roles in the project.
Full Text Available Under current conditions, in which information is often equated with power, the major interest of most organizations is to collect the necessary knowledge at a high qualitative level and to use it with maximum efficiency, materializing it into adequate managerial conduct, actions and decisions. Together with securing material, human, informational and financial resources, performing organizations are more and more preoccupied with the production, transmission, use, storage and protection of knowledge, especially strategic knowledge essential for a company's development. Information has increasingly become a resource, a major asset, a main product and at the same time a strategic advantage for organizations, a fact that significantly influences the content and manner of management and urgently calls for the promotion of knowledge-based management.
Full Text Available Framed within the world of Artificial Intelligence, and more precisely within the FunGramKB project, a user-friendly environment for the semiautomatic construction of a multipurpose lexico-conceptual knowledge base for Natural Language Processing systems, the aim of this paper is two-fold. Firstly, we provide a necessarily non-exhaustive theoretical discussion of FunGramKB in which we introduce the main elements that make up its Ontology (i.e. Thematic Frames, Meaning Postulates, different types of concepts, etc.). Secondly, we describe the meticulous process carried out by knowledge engineers when populating this conceptually-driven Ontology. In doing so, we examine various examples belonging to the domain of 'change' or #TRANSFORMATION (in the COREL notation), in an attempt to show how conceptual knowledge can be modeled for Artificial Intelligence purposes.
Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw
The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach it deserves, with negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.
Full Text Available This paper motivates, presents and demonstrates in use a methodology based on complex network analysis to support research aimed at identifying sources in the process of knowledge transfer at the interorganizational level. The importance of this methodology is that it states a unified model for revealing knowledge sharing patterns and for comparing results from multiple studies on data from different periods of time and different sectors of the economy. The methodology does not address the underlying statistical processes; for those, national statistics departments (NSDs) provide documents and tools at their websites. Rather, this proposal provides a guide to modelling the inferences gathered from data processing, revealing links between the sources and recipients of the knowledge being transferred, where the recipient identifies the source as the main origin of new knowledge creation. Some national statistics departments set as the objective of these surveys the characterization of innovation dynamics in firms and the analysis of the use of public support instruments, and from this characterization scholars conduct different researches. Measures of the dimensions of the network composed of manufacturing firms and other organizations form the basis for inquiring into the structure that emerges from taking ideas from other organizations to incept innovations. These two sets of actors form a two-mode network: a link between two actors (network nodes), one acting as the source of the idea and the other as the destination, arises when an organization, or an event organized by an organization, provides ideas to a firm. The resulting design satisfies the objective of being a methodological model for identifying the sources of knowledge effectively used in innovation.
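A minimal sketch of the two-mode source/recipient structure described above, assuming survey responses reduced to (source organization, recipient firm) pairs; the organization names are invented for illustration:

```python
from collections import Counter

# two-mode data: (source organization, recipient firm) pairs,
# e.g. distilled from an innovation survey (illustrative names)
links = [
    ("university_X", "firm_1"), ("university_X", "firm_2"),
    ("trade_fair_Y", "firm_2"), ("supplier_Z", "firm_3"),
    ("university_X", "firm_3"),
]

# degree of each source in the two-mode network: how many distinct
# firms name it as the origin of an idea they used
source_degree = Counter(src for src, _ in set(links))
main_source, n_firms = source_degree.most_common(1)[0]
print(main_source, n_firms)  # university_X 3
```

Even this crude degree count already answers the methodology's core question, which source is most often identified as the origin of transferred knowledge; richer bipartite measures (projections, centralities) refine the same structure.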
O'Connor, Thomas G
This month's collation of papers deals with social behaviors that operationalize key constructs in fields covered by the journal, including attachment theory and parenting; emotional regulation; psychopathology of several forms; and general and specific cognitive abilities. Notably, many examples are offered of how these social behaviors link with biology. That is an obvious and important direction for clinical research insofar as it helps to erase a perceptual chasm and artificial duality between 'behavior' and 'biology'. But, although it must be the case that social behavior has biological connections of one sort or another, identifying reliable connections with practical application has proved to be a non-trivial challenge. In particular, the challenge seems to lie in measuring social behavior meaningfully enough that it could be expected to have a biological pulse, and in measuring biological markers systematically enough that emergent downstream effects would surface. Associations are not especially uncommon, but constructing a practically broad model from a bricolage of scattered and disconnected parts and findings in the literature has been a frustrating task. Several reports in this issue offer contrasts that may help move along this line of study.
This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge and in the design of computers themselves as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960 the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's quality as a device that was centralized posed an even greater challenge to potential biologist users than did the computer's need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond, the personal computer.
Full Text Available Problem statement: Research into robot motion control offers research opportunities that will challenge scientists and engineers for years to come. Autonomous robots are increasingly evident in many aspects of industry and everyday life, and robust robot motion control can be used for homeland security and many consumer applications. This study discussed an adaptive fuzzy knowledge based controller for robot motion control in indoor and outdoor environments. Approach: The proposed method consisted of two components: a process monitor that detects changes in the process characteristics and an adaptation mechanism that uses information passed to it by the process monitor to update the controller parameters. Results: Experimental evaluation was carried out in both indoor and outdoor environments, where the robot communicates with the base station through its wireless fidelity antenna, and the performance monitor used a set of five performance criteria to assess the fuzzy knowledge based controller. Conclusion: The proposed method was found to be robust.
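The two-component design (process monitor plus adaptation mechanism) can be sketched as follows; the proportional controller, gain-update rule and toy plant are illustrative stand-ins for the fuzzy knowledge base, not the authors' controller:

```python
class ProcessMonitor:
    """Flags a change in process characteristics when the tracking
    error leaves a tolerance band."""
    def __init__(self, threshold):
        self.threshold = threshold

    def changed(self, error):
        return abs(error) > self.threshold

class AdaptiveController:
    """Proportional controller whose gain is adapted whenever the
    monitor reports a change (a crude stand-in for updating the
    parameters of a fuzzy rule base)."""
    def __init__(self, gain, monitor, max_gain=5.0):
        self.gain = gain
        self.monitor = monitor
        self.max_gain = max_gain

    def command(self, setpoint, measured):
        error = setpoint - measured
        if self.monitor.changed(error):
            # adaptation mechanism: raise control effort, capped for stability
            self.gain = min(self.gain * 1.1, self.max_gain)
        return self.gain * error

# drive a toy first-order plant toward a setpoint of 1.0
ctrl = AdaptiveController(gain=0.5, monitor=ProcessMonitor(threshold=0.2))
x = 0.0
for _ in range(100):
    x += 0.1 * ctrl.command(1.0, x)
print(round(x, 4))  # converges close to 1.0
```

The point of the split is separation of concerns: the monitor only decides *when* the process has changed, while the adaptation rule decides *how* to respond, so either can be replaced (e.g. by fuzzy inference) independently.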
Karpov, V.; Denikin, A. S.; Alekseev, A. P.; Zagrebaev, V. I.; Rachkov, V. A.; Naumenko, M. A.; Saiko, V. V.
Principles underlying the organization and operation of the NRV web knowledge base on low-energy nuclear physics (http://nrv.jinr.ru) are described. This base includes a vast body of digitized experimental data on the properties of nuclei and on cross sections for nuclear reactions that is combined with a wide set of interconnected computer programs for simulating complex nuclear dynamics, which work directly in the browser of a remote user. Also, the current situation in the realms of application of network information technologies in nuclear physics is surveyed. The potential of the NRV knowledge base is illustrated in detail by applying it to the example of an analysis of the fusion of nuclei that is followed by the decay of the excited compound nucleus formed.
Full Text Available BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed at comprehensively exploiting the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic, since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. For this purpose, the Medline, Embase, Cancerlit and Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of this original method.
Nikolsky, Yuri; Kirillov, Eugene; Zuev, Roman; Rakhmatulin, Eugene; Nikolskaya, Tatiana
Analysis of microarray, SNP, proteomics, and other high-throughput (OMICs) data is challenging because of its biological complexity and high level of technical and biological noise. One way to deal with both problems is to perform analysis with a high-fidelity annotated knowledge base of protein interactions, pathways, and functional ontologies. This knowledge base has to be structured in a computer-readable format and must include software tools for managing experimental data, analysis, and reporting. Here we present MetaDiscovery, an integrated platform for functional data analysis that has been under development at GeneGo for the past 8 years. On the content side, MetaDiscovery encompasses a comprehensive database of protein interactions of different types, pathways, network models and 10 functional ontologies covering human, mouse, and rat proteins. The analytical toolkit includes tools for gene/protein list enrichment analysis, a statistical "interactome" tool for identification of over- and under-connected proteins in the data set, and a network module made up of network generation algorithms and filters. The suite also features MetaSearch, an application for combinatorial search of the database content, as well as a Java-based tool called MapEditor for drawing and editing custom pathway maps. Applications of MetaDiscovery include identification of potential biomarkers and drug targets, pathway hypothesis generation, analysis of biological effects for novel small molecule compounds, and clinical applications (analysis of large cohorts of patients, and translational and personalized medicine).
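The gene/protein list enrichment analysis mentioned above is commonly implemented as a one-sided hypergeometric test: how surprising is it that so many genes from an experimental hit list fall into one functional category? A minimal stdlib-only sketch, with invented gene counts (this is the generic statistic, not GeneGo's specific implementation):

```python
from math import comb

def enrichment_p(hits, draws, category_size, universe):
    """One-sided hypergeometric p-value: probability of observing at least
    `hits` genes from a category of size `category_size` when drawing
    `draws` genes from a universe of `universe` genes."""
    total = comb(universe, draws)
    return sum(comb(category_size, k) * comb(universe - category_size, draws - k)
               for k in range(hits, min(draws, category_size) + 1)) / total

# Example: 4 of our 10 hit genes fall in a 20-gene pathway drawn from a
# 1000-gene universe (numbers invented) -- strongly enriched.
p = enrichment_p(hits=4, draws=10, category_size=20, universe=1000)
print(p < 0.001)
```

In practice such p-values are corrected for multiple testing across the many categories in the ontology (e.g. Benjamini-Hochberg).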
In this session, Session JP4, the discussion focuses on the following topics: Hematopoiesis Dynamics in Irradiated Mammals, Mathematical Modeling; Estimating Health Risks in Space from Galactic Cosmic Rays; Failure of Heavy Ions to Affect Physiological Integrity of the Corneal Endothelial Monolayer; Application of an Unbiased Two-Gel cDNA Library Screening Method to Expression Monitoring of Genes in Irradiated Versus Control Cells; Detection of Radiation-Induced DNA Strand Breaks in Mammalian Cells by Enzymatic Post-Labeling; Evaluation of Bleomycin-Induced Chromosome Aberrations Under Microgravity Conditions in Human Lymphocytes Using FISH Techniques; Technical Description of the Space Exposure Biology Assembly SEBA on ISS; and Cytogenetic Research in Biological Dosimetry.
Trujillo, Caleb M; Anderson, Trevor R; Pelaez, Nancy J
In biology and physiology courses, students face many difficulties when learning to explain mechanisms, a topic that is demanding due to the immense complexity and abstract nature of molecular and cellular mechanisms. To overcome these difficulties, we asked the following question: how does an instructor transform their understanding of biological mechanisms and other difficult-to-learn topics so that students can comprehend them? To address this question, we first reviewed a model of the components used by biologists to explain molecular and cellular mechanisms: the MACH model, with the components of methods (M), analogies (A), context (C), and how (H). Next, instructional materials were developed and the teaching activities were piloted with a physical MACH model. Students who used the MACH model to guide their explanations of mechanisms exhibited both improvements and some new difficulties. Third, a series of design-based research cycles was applied to bring the activities with an improved physical MACH model into biology and biochemistry courses. Finally, a useful rubric was developed to address prevalent student difficulties. Here, we present, for physiology and biology instructors, the knowledge and resources for explaining molecular and cellular mechanisms in undergraduate courses with an instructional design process aimed at realizing pedagogical content knowledge for teaching. Our four-stage process could be adapted to advance instruction with a range of models in the life sciences.
Babaie, Hassan A.; Broda, Cindi M.; Hadizadeh, Jafar; Kumar, Anuj
Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer, which allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which served as a visual guide for developing the ontology in OWL with the Protégé ontology editor. Various Semantic Web technologies, such as the RDF, RDFS, and OWL ontology languages, the SPARQL query language, and the Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed with Jena, a Java-based framework, using AJAX technology, JSP pages, and Java servlets, and deployed via an Apache Tomcat server. The interface allows a registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the
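The drop-down-driven search described above amounts to matching a triple pattern against the knowledge base, as a SPARQL basic graph pattern would. A pure-Python stand-in (the real system uses Jena and SPARQL; the triples, class names, and investigator IDs below are hypothetical, not SAFOD data):

```python
# Minimal triple-store sketch of the kind of search the BM2KB Web interface
# performs: users pick classes/properties, and matching triples are returned.

TRIPLES = {
    ("sample_42", "rdf:type", "CataclasiteSample"),
    ("sample_42", "hasMicrostructure", "ShearBand"),
    ("sample_42", "contributedBy", "investigator_1"),
    ("sample_7",  "rdf:type", "GougeSample"),
    ("sample_7",  "contributedBy", "investigator_2"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    return sorted(t for t in TRIPLES
                  if subject in (None, t[0])
                  and predicate in (None, t[1])
                  and obj in (None, t[2]))

# The "data contributed by a specific investigator" search:
print(query(predicate="contributedBy", obj="investigator_1"))
```

A reasoner such as Pellet would additionally infer triples entailed by the ontology (e.g. superclass membership) before matching, which this sketch omits.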
Armstrong, H.M.; Harris, J.M.; Young, C.J.
The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanoes), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by
William Hazelton; Suresh Moolgavkar; E. Georg Luebeck
This past year we have made substantial progress in modeling the contribution of homeostatic regulation to low-dose radiation effects and carcinogenesis. We have worked to refine and apply our multistage carcinogenesis models to explicitly incorporate cell cycle states, simple and complex damage, checkpoint delay, slow and fast repair, differentiation, and apoptosis to study the effects of low-dose ionizing radiation in mouse intestinal crypts, as well as in other tissues. We have one paper accepted for publication in "Advances in Space Research", and another manuscript in preparation describing this work. I also wrote a chapter describing our combined cell-cycle and multistage carcinogenesis model that will be published in a book on stochastic carcinogenesis models edited by Wei-Yuan Tan. In addition, we organized and held a workshop on "Biologically Based Modeling of Human Health Effects of Low Dose Ionizing Radiation", July 28-29, 2005 at Fred Hutchinson Cancer Research Center in Seattle, Washington. The workshop had over 20 participants, including keynote speaker Mary Helen Barcellos-Hoff, with talks by most of the low-dose modelers in the DOE low-dose program as well as by experimentalists, including Les Redpath and Mary Helen, Noelle Metting from DOE, and Tony Brooks. It appears that homeostatic regulation may be central to understanding low-dose radiation phenomena. The primary effects of ionizing radiation (IR) are cell killing, delayed cell cycling, and induction of mutations. However, homeostatic regulation causes cells that are killed or damaged by IR to eventually be replaced. Cells with an initiating mutation may have a replacement advantage, leading to clonal expansion of these initiated cells. Thus we have focused particularly on modeling effects that disturb homeostatic regulation as early steps in the carcinogenic process. There are two primary considerations that support our focus on homeostatic regulation. First, a number of
Conclusions: The 'knowledge entry' function allows fast formal representation of clinical knowledge (<14 minutes per disease) and testing using the integrated quality management system. In the near future, new measures must be found to improve the currently problematic representation of disease time processes, descriptions, warnings and graphics for formally representing clinical knowledge with the medrapid 'knowledge entry' function.
Tweets provide a continuous update on current events. However, tweets are short, personalized and noisy, which raises additional challenges for event extraction and representation. Extracting events from Arabic tweets is a new research domain where few examples, if any, of previous work can be found. This paper describes a knowledge-based approach for event extraction from Arabic tweets. The approach uses an unsupervised rule-based technique for event extraction and provides named entity disambiguation of event-related entities (i.e. person, organization, and location). Extracted events and their related entities are added to an event knowledge base, where tagged tweet entities are linked to their corresponding entities represented in the knowledge base. The proposed approach was evaluated on a dataset of 1K Arabic tweets covering different types of events (i.e. instant events and interval events). Results show that the approach achieves an accuracy of 75.9% for event trigger extraction, 87.5% for event time extraction, and 97.7% for event type identification.
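The general shape of an unsupervised rule-based extractor like the one described can be sketched with a few patterns. The real rules target Arabic text; the English trigger words, event types, and time patterns below are purely illustrative placeholders.

```python
import re

# Toy rule-based event extractor: match trigger words to assign an event
# type, then match a time expression. All rules here are invented stand-ins
# for the paper's Arabic rule base.

TRIGGER_RULES = {
    "sports":   re.compile(r"\b(match|final|tournament)\b", re.I),
    "politics": re.compile(r"\b(election|summit|protest)\b", re.I),
}
TIME_RULE = re.compile(r"\b(today|tomorrow|yesterday)\b", re.I)

def extract_event(tweet):
    event = {"type": None, "trigger": None, "time": None}
    for etype, rule in TRIGGER_RULES.items():
        m = rule.search(tweet)
        if m:
            event["type"], event["trigger"] = etype, m.group(0)
            break
    t = TIME_RULE.search(tweet)
    if t:
        event["time"] = t.group(0)
    return event

print(extract_event("The election summit starts tomorrow in Cairo"))
```

The knowledge-base linking step (disambiguating "Cairo" to a location entity) would then attach the extracted event to canonical entities, which this sketch omits.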
Andreasen, Troels; Nilsson, Jørgen Fischer
We argue in favour of adopting a form of natural logic for ontology-structured knowledge bases as an alternative to description logic and rule based languages. Natural logic is a form of logic resembling natural language assertions, unlike description logic. This is essential e.g. in life sciences, where the large and evolving knowledge specifications should be directly accessible to domain experts. Moreover, natural logic comes with intuitive inference rules. The considered version of natural logic leans toward the closed world assumption (CWA), unlike the open world assumption with classical negation in description logic. We embed the natural logic in DATALOG clauses, which take care of the computational inference in connection with querying.
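The Datalog embedding of simple natural-logic assertions (e.g. "every X is a Y") can be illustrated with a tiny forward-chaining engine. The class names are invented life-science examples, not taken from the paper:

```python
# Sketch: natural-logic inclusion assertions as Datalog-style facts, with
# forward chaining implementing the rule  isa(X, Z) :- isa(X, Y), isa(Y, Z).

FACTS = {("isa", "insulin", "hormone"),
         ("isa", "hormone", "signaling_molecule")}

def forward_chain(facts):
    """Compute the transitive closure of `isa` by repeated rule application."""
    facts = set(facts)
    while True:
        new = {("isa", x, z)
               for (_, x, y1) in facts for (_, y2, z) in facts if y1 == y2}
        if new <= facts:
            return facts
        facts |= new

closure = forward_chain(FACTS)
# Closed-world querying: whatever is not derivable is taken to be false.
print(("isa", "insulin", "signaling_molecule") in closure)  # True (derived)
print(("isa", "hormone", "insulin") in closure)             # False (CWA)
```

A production Datalog engine would evaluate such rules bottom-up with much better indexing, but the semantics is the same.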
Cucinotta, Francis A.
Exposures from galactic cosmic rays (GCR), made up of high-energy protons and high-charge-and-energy (HZE) nuclei, and from solar particle events (SPEs), comprised largely of low- to medium-energy protons, are the primary health concern for astronauts on long-term space missions. Experimental studies have shown that HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, making risk assessments for cancer and degenerative risks, such as central nervous system effects and heart disease, highly uncertain. The goal for space radiation protection at NASA is to reduce the uncertainties in risk assessments for Mars exploration to be small enough to ensure that acceptable levels of risk are not exceeded and to adequately assess the efficacy of mitigation measures such as shielding or biological countermeasures. We review the recent BEIR VII and UNSCEAR-2006 models of cancer risks and their uncertainties. These models are shown to have an inherent 2-fold uncertainty, defined as the ratio of the 95% confidence level to the mean projection, even before radiation quality is considered. In order to overcome the uncertainties in these models, new approaches to risk assessment are warranted. We consider new computational biology approaches to modeling cancer risks. A basic program of research is described, ranging from stochastic descriptions of the physics and chemistry of radiation tracks and the biochemistry of metabolic pathways to emerging biological understanding of the cellular and tissue modifications leading to cancer.
Santosa, Munirah Mohamad; Low, Blaise Su Jun; Pek, Nicole Min Qian; Teo, Adrian Kee Keong
In the field of stem cell biology and diabetes, we and others seek to derive mature and functional human pancreatic β cells for disease modeling and cell replacement therapy. Traditionally, knowledge gathered from rodents is extended to human pancreas developmental biology research involving human pluripotent stem cells (hPSCs). While much has been learnt from rodent pancreas biology in the early steps toward Pdx1(+) pancreatic progenitors, much less is known about the transition toward Ngn3(+) pancreatic endocrine progenitors. Essentially, the later steps of pancreatic β cell development and maturation remain elusive to date. As a result, the most recent advances in the stem cell and diabetes field have relied upon combinatorial testing of numerous growth factors and chemical compounds in an arbitrary trial-and-error fashion to derive mature and functional human pancreatic β cells from hPSCs. Although this hit-or-miss approach appears to have made some headway in maturing human pancreatic β cells in vitro, its underlying biology is vaguely understood. Therefore, in this mini-review, we discuss some of these late-stage signaling pathways that are involved in human pancreatic β cell differentiation and highlight our current understanding of their relevance in rodent pancreas biology. Our efforts here unravel several novel signaling pathways that can be further studied to shed light on unexplored aspects of rodent pancreas biology. New investigations into these signaling pathways are expected to advance our knowledge in human pancreas developmental biology and to aid in the translation of stem cell biology in the context of diabetes treatments.
ZHU XiaoHua; FENG XiaoMing; ZHAO YingShi
Quantitative remote sensing inversion is ill-posed. The Moderate Resolution Imaging Spectroradiometer at 250 m resolution (MODIS_250m) contains two bands. To deal with this ill-posed inversion of MODIS_250m data, we propose a framework, the Multi-scale, Multi-stage, Sample-direction Dependent, Target-decisions (Multi-scale MSDT) inversion method, based on spatial knowledge. First, MODIS images (1 km, 500 m, 250 m) are used to extract multi-scale spatial knowledge. The inversion accuracy of MODIS_1km data is improved by reducing the impact of spatial heterogeneity. Then, coarse-scale inversion results are taken as prior knowledge for the fine scale, again by inversion. The prior knowledge is updated after each inversion step. At each scale, from MODIS_1km to MODIS_250m, the inversion is directed by the Uncertainty and Sensitivity Matrix (USM), and the most uncertain parameters are inverted using the most sensitive data. All remote sensing data are involved in the inversion, during which multi-scale spatial knowledge is introduced to reduce the impact of spatial heterogeneity. The USM analysis is used to implement a reasonable allocation of limited remote sensing data in the model space. In the entire multi-scale inversion process, field data, spatial knowledge and multi-scale remote sensing data are all involved. As the multi-scale, multi-stage inversion is gradually refined, initial expectations of parameters become more reasonable and their uncertainty range is effectively reduced, so that the inversion becomes increasingly targeted. Finally, the method is tested by retrieving the Leaf Area Index (LAI) of the crop canopy in the Heihe River Basin. The results show that the proposed method is reliable.
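The coarse-to-fine idea, where each inversion result becomes the prior for the next, narrower-scale stage, can be illustrated with a scalar Gaussian update. The two-stage numbers and the scalar model below are invented; the actual method operates on full radiative-transfer models and the USM, not on this toy:

```python
# Toy illustration of multi-stage inversion: a coarse-scale estimate serves
# as the prior for the fine scale, and each update shrinks the uncertainty.

def bayes_update(prior_mean, prior_var, obs_mean, obs_var):
    """Gaussian conjugate update: combine prior and observation by precision."""
    w = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + w * (obs_mean - prior_mean)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return post_mean, post_var

# Stage 1: coarse (1 km) inversion gives a rough LAI estimate, wide prior.
mean, var = bayes_update(prior_mean=3.0, prior_var=4.0, obs_mean=2.0, obs_var=1.0)
# Stage 2: fine (250 m) inversion refines it further.
mean, var = bayes_update(mean, var, obs_mean=1.8, obs_var=0.5)
print(round(mean, 2), round(var, 2))  # 1.95 0.31 -- uncertainty reduced from 4.0
```

The monotone shrinkage of the posterior variance is the formal counterpart of the abstract's claim that the parameters' "uncertainty range is effectively reduced" at each stage.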
McAllister, M.; Tressel, S.
In the age of information, scientists and non-scientists alike expect answers to their questions to be available on screen with just a few clicks of a mouse. Over the past decade, NSIDC User Services has seen a sizable increase in total data users, with a growing percentage coming from non-science backgrounds. In order to meet the demands of so many curious minds, and to better appeal to the diversifying user community, NSIDC User Services is in the process of utilizing professional helpdesk software to create the NSIDC Knowledge Base: a multimedia platform for supporting data users. Ultimately, searchable, referenced articles on common user problems and FAQs will appear beside video tutorials demonstrating how to use the data. Links to other data centers' user support departments will be offered when questions expand beyond the scope of NSIDC. The NSIDC Knowledge Base aims to be a resource allowing users to help themselves, as well as a gateway to finding resources at related data centers.
Giani, U; Martone, P
This paper is an attempt to develop a distance learning model grounded upon a strict integration of problem based learning (PBL), dynamic knowledge networks (DKN) and web tools, such as hypermedia documents, synchronous and asynchronous communication facilities, etc. The main objective is to develop a theory of distance learning based upon the idea that learning is a highly dynamic cognitive process aimed at connecting different concepts in a network of mutually supporting concepts. Moreover, this process is supposed to be the result of a social interaction that has to be facilitated by the web. The model was tested by creating a virtual classroom of medical and nursing students and activating a learning session on the concept of knowledge representation in health sciences.
Yinan Zhao; Fengcong Li; Xiaolin Qiao
The detection performance and the constant false alarm rate behavior of conventional adaptive detectors are severely degraded in heterogeneous clutter. This paper designs and analyses a knowledge-based (KB) adaptive polarimetric detector in heterogeneous clutter. The proposed detection scheme is composed of a data selector using polarization knowledge and an adaptive polarization detector using training data. A polarization data selector based on the maximum likelihood estimation is proposed to remove outliers from the heterogeneous training data. This selector can remove outliers effectively, thus the training data is purified for estimating the clutter covariance matrix. Consequently, the performance of the adaptive detector is improved. We assess the performance of the KB adaptive polarimetric detector and the adaptive polarimetric detector without a data selector using simulated data and IPIX radar data. The results show that the KB adaptive polarization detector outperforms its non-KB counterparts.
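The selector-then-estimator idea above can be sketched with a generalized-inner-product screen: score each training sample against the covariance estimated from all samples, drop the worst-scoring one, and re-estimate. The 2-D real-valued samples and the "drop one" rule are illustrative stand-ins for polarimetric radar returns and the paper's ML-based selector.

```python
# Simplified sketch of knowledge-based training-data selection before
# clutter covariance estimation. Data are invented.

def est_cov(samples):
    """Sample mean and 2x2 sample covariance (biased, divides by n)."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / n
    cyy = sum((s[1] - my) ** 2 for s in samples) / n
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    return (mx, my), ((cxx, cxy), (cxy, cyy))

def gip(sample, mean, cov):
    """Generalized inner product (x-m)^T C^{-1} (x-m), via the 2x2 inverse."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    dx, dy = sample[0] - mean[0], sample[1] - mean[1]
    return (dx * (d * dx - b * dy) + dy * (a * dy - c * dx)) / det

training = [(1.0, 0.5), (0.9, 0.6), (1.1, 0.4), (1.0, 0.5), (8.0, 7.0)]
mean, cov = est_cov(training)
purified = sorted(training, key=lambda s: gip(s, mean, cov))[:-1]  # drop top outlier
print((8.0, 7.0) in purified)  # False: the outlier was screened out
```

With the outlier removed, the re-estimated covariance reflects the homogeneous clutter only, which is what improves the downstream adaptive detector.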
Mawussi, Kwamiwi; Tapie, Laurent
International audience; Recent evolutions in the forging process induce more complex shapes in forging dies. These evolutions, combined with the High Speed Machining (HSM) of forging dies, lead to an important increase in machining preparation time. In this context, an original approach for generating the machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of a complex forging die into geometric features. Technological data and ...
Furian, Robert; Lacroix, Frank; Stokic, Dragan; Correia, Ana; Grama, Cristina; Faltus, Stefan; Maksimovic, Maksim; Grote, Karl-Heinrich; Beyer, Christiane
Part 2: Design, Manufacturing and Production Management; International audience; The objective of the research is to examine and develop new methods and tools for management of knowledge in Lean Product development. Lean Product development attempts to apply lean philosophy and principles within product development process. Special emphasis is given to the so-called Set Based Lean Design principles. Such product development process requires innovative methodologies and tools for capturing, re...
Several manipulation steps are required to complete the modeling of connection elements such as bolts and pins in commercial CAD systems. This leads to low efficiency, difficulty in assuring relative positions, and the impossibility of expressing rules and knowledge. Based on an analysis of the inner characteristics of interpart relations, detail modification, and assembly relations of mechanical connecting elements, the idea of extending the feature modeling of a part to interpart feature modeling for assembly purposes is presen...
In this paper we address the issues involved in developing a knowledge based system (KBS) [Jac90], [Fro86] for constructing promotions of various kinds of produce within supermarkets [R.J89]. We are currently working with a large retail company which supplies both expertise and sample data. Specifically, the system under development is to be used to determine the promotional layout and 'worth' of promotions within the fresh produce section, i.e. vegetables, salads and fruit. 'Worth' transla...
The quantity of thoracic radiographs in the medical field is ever growing. An automated system for segmenting the images would help doctors enormously. Some approaches are knowledge-based; we therefore propose an ontology for this purpose. It is machine-oriented rather than human-oriented: all the structures visible on a thoracic image are described from a technical point of view.
Mori, Hiroshi; Yamada, Takuji; Kurokawa, Ken
Microbes are essential for every part of life on Earth. Numerous microbes inhabit the biosphere, many of which are uncharacterized or uncultivable. They form complex microbial communities that deeply affect their surrounding environments. Metagenome analysis provides a radically new way of examining such complex microbial communities without isolation or cultivation of individual community members. In this article, we briefly discuss metagenomics and the development of knowledge bases, and also discuss future trends in metagenomics.
Zhao, Pengjun; Lu, Bo
This study examines knowledge-based urban development in Beijing with the objective of revealing the impact of the 'synergetic' forces of globalisation and local government intervention on knowledge-based urban development in the context of the coexisting processes of globalisation and decentralisat
Purpose: Within project-based supply chain inter-organizational cooperative innovation, the achievement of project value-adding is reflected by factors such as the project-based organizational effect level and the relationship between project cooperative innovation objectives. The purpose is to provide a reliable reference for the contractor to reasonably allocate effect level and resources between the knowledge input and innovation stages and to realize knowledge collaboration for the project-based supply chain. Design/methodology/approach: Based on the assumption of equal cooperation between project-based organizations, from the view of maximizing project value-adding and taking into consideration the relationship of effect cost between the knowledge input and innovation stages, a knowledge collaborative incentive model for project-based supply chain inter-organizational cooperative innovation was established and solved through a first-order and second-order approach; digital simulation and example analysis are then presented. Findings: The results show that the project management enterprise, by adjusting the project knowledge collaboration incentive intensity and implementing a knowledge input-innovation coordinative incentive strategy, could not only achieve maximal project value-adding but also realize a Pareto improvement of net earnings between the project management enterprise and the contractor. Research limitations/implications: To simplify the knowledge flow among project-based organizations, the knowledge flow in the model hypothesis is presented as knowledge input and knowledge innovation stages, which may affect the final analysis results. Originality/value: In construction project practice, knowledge has become more and more important for achieving project value-adding. The research can provide a theoretical guideline for project-based organizations such as the contractor and the owner, especially on how to utilize their core knowledge.
Rare attempts to use knowledge technologies and other relevant approaches are found in river basin management. Some applications of expert systems, as well as utilization of soft computing techniques (such as neural networks or genetic algorithms), are known at an experimental level. Knowledge management approaches have not been used at all. In this paper we discuss knowledge-based approaches in river basin management as a difficult yet important direction which could prove to be helpful. We summarize the research done in the scope of the AQUIN project, one of the first Czech knowledge management projects in river basin management. The project was initiated by the water management company in Pilsen, where dispatchers make decisions about manipulations on the Nýrsko reservoir, the strategic source of drinking water for the inhabitants of Pilsen. The project aim was to support dispatchers' decision making under a high degree of uncertainty or data shortage. The research continues in the scope of a new project, AQUINpro, planned for the period 2006 to 2008.
Natália Chaves Lessa Schots
Background: Process performance analysis is a key step for implementing continuous improvement in software organizations. However, the knowledge needed to execute such analysis is not trivial, and the person responsible for executing it must be provided with appropriate support. Aim: This paper presents a knowledge-based environment, named SPEAKER, proposed for supporting software organizations during the execution of process performance analysis. SPEAKER comprises a body of knowledge and a set of activities and tasks for software process performance analysis, along with supporting tools for executing these activities and tasks. Method: We conducted an informal literature review and a systematic mapping study, which provided basic requirements for the proposed environment. We implemented the SPEAKER environment by integrating supporting tools for the execution of performance analysis activities and tasks with the knowledge necessary to execute them, in order to accommodate the variability presented by the characteristics of these activities. Results: In this paper, we describe each SPEAKER module and the individual evaluations of these modules, and also present an example of use showing how the environment can guide the user through a specific performance analysis activity. Conclusion: Although we only conducted individual evaluations of SPEAKER's modules, the example of use indicates the feasibility of the proposed environment. Therefore, the environment as a whole will be further evaluated to verify whether it attains its goal of assisting non-specialists in the execution of process performance analysis.
Comas, J; Meabe, E; Sancho, L; Ferrero, G; Sipma, J; Monclús, H; Rodriguez-Roda, I
MBR technology is currently challenging traditional wastewater treatment systems and is increasingly selected for WWTP upgrading. MBR systems typically are constructed on a smaller footprint and provide superior treated water quality. However, the main drawback of MBR technology is that membrane permeability declines during filtration due to membrane fouling, which to a large extent causes the high aeration requirements of an MBR needed to counteract this fouling phenomenon. Due to the complex and still unknown mechanisms of membrane fouling, it is neither possible to clearly describe its development by means of a deterministic model nor to control it with a purely mathematical law. Consequently, the majority of MBR applications are controlled in an "open-loop" way, i.e. with predefined and fixed air scour and filtration/relaxation or backwashing cycles, and scheduled inline or offline chemical cleaning as a preventive measure, without taking into account the real needs of membrane cleaning based on filtration performance. However, existing theoretical and empirical knowledge about potential cause-effect relations between a number of factors (influent characteristics, biomass characteristics and operational conditions) and MBR operation can be used to build a knowledge-based decision support system (KB-DSS) for the automatic control of MBRs. This KB-DSS contains a knowledge-based control module which, based on real-time comparison of the current permeability trend with "reference trends", aims at optimizing operation and energy costs and decreasing fouling rates. In practice, the proposed automatic control system regulates the set points of the key operational variables controlled in MBR systems (permeate flux, relaxation and backwash times, backwash flows and times, aeration flow rates, chemical cleaning frequency, waste sludge flow rate and recycle flow rates) and identifies their optimal values. This paper describes the concepts and the 3-level architecture
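The knowledge-based control module described above can be caricatured as a small rule base acting on the ratio between the current permeability-decline slope and a reference slope. The thresholds, units, and set-point adjustments below are invented placeholders, not the paper's actual rule base.

```python
# Hedged sketch of a knowledge-based control rule for an MBR: compare the
# current permeability trend with a reference trend and adjust set points.

def control_action(permeability_slope, reference_slope):
    """Return set-point adjustments given permeability slopes (both negative,
    e.g. in LMH/bar per hour); ratio > 1 means faster-than-reference fouling."""
    ratio = permeability_slope / reference_slope
    if ratio > 2.0:
        # Severe decline: trigger chemical cleaning and back off the flux.
        return {"chemical_cleaning": True, "flux_change": -0.10}
    if ratio > 1.2:
        # Moderate decline: more air scour and longer backwash.
        return {"air_scour_change": +0.15, "backwash_time_change": +0.20}
    return {}  # within normal range: keep current set points

print(control_action(-1.5, -1.0))  # moderately fast decline
print(control_action(-3.0, -1.0))  # severe decline
```

Acting on trends rather than instantaneous values is what distinguishes this closed-loop scheme from the fixed-cycle "open-loop" operation criticized in the abstract.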
Martin G. D. Kelleher
The prevalence and severity of tooth wear is increasing in industrialised nations. Yet there is no high-level evidence to support or refute any therapeutic intervention. In the absence of such evidence, many currently prevailing management strategies for tooth wear may be failing in their duty of care to first and foremost improve the oral health of patients with this disease. This paper promotes biologically sound approaches to the management of tooth wear on the basis of current best evidence of the aetiology and clinical features of this disease. The relative risks and benefits of the varying approaches to managing tooth wear are discussed with reference to long-term follow-up studies. Drawing on ethical standards such as "The Daughter Test", this paper presents case reports of patients with moderate-to-severe levels of tooth wear managed in line with these biologically sound principles.
Hogan William R
Background: The incorporation of biological knowledge can enhance the analysis of biomedical data. We present a novel method that uses a proteomic knowledge base to enhance the performance of a rule-learning algorithm in identifying putative biomarkers of disease from high-dimensional proteomic mass spectral data. In particular, we use the Empirical Proteomics Ontology Knowledge Base (EPO-KB), which contains previously identified and validated proteomic biomarkers, to select m/z values in a proteomic dataset prior to analysis to increase performance. Results: We show that using the EPO-KB as a pre-processing method, specifically selecting all biomarkers found only in the biofluid of the proteomic dataset, reduces the dimensionality by 95% and provides a statistically significant increase in performance over no variable selection and random variable selection. Conclusion: Knowledge-based variable selection, even with a sparsely populated resource such as the EPO-KB, increases the overall performance of rule learning for disease classification from high-dimensional proteomic mass spectra.
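The knowledge-based variable selection described above reduces, at its core, to a filter: keep only spectral features that match a previously validated biomarker. A minimal sketch in Python (illustrative only; the function name, tolerance parameter, and flat m/z lists are assumptions, not the EPO-KB interface):

```python
def select_biomarker_mzs(spectrum_mzs, known_biomarkers, tol=0.5):
    """Keep only m/z features within `tol` of a previously validated biomarker,
    mimicking knowledge-based variable selection (hypothetical interface)."""
    return [mz for mz in spectrum_mzs
            if any(abs(mz - b) <= tol for b in known_biomarkers)]
```

For example, with two known biomarkers at 100.0 and 500.0 Da, a spectrum with features at 100.1, 250.0 and 499.7 is reduced to the two matching features, discarding the rest before rule learning.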
Wu, Ji-Wei; Tseng, Judy C. R.; Hwang, Gwo-Jen
Inquiry-Based Learning (IBL) is an effective approach for promoting active learning. When inquiry-based learning is incorporated into instruction, teachers provide guiding questions for students to actively explore the required knowledge in order to solve the problems. Although the World Wide Web (WWW) is a rich knowledge resource for students to…
Lentz, Leo; Pander Maat, Henk; Sanders, Ted
Purpose: This paper introduces the Knowledge Base Comprehensible Text, a digital resource containing 702 studies on comprehension and usability of text and discourse, published between 1980 and 2010. The paper explains which publications were included in the knowledge base, how they were collected,
There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily int
LUAN ShangMin(栾尚敏); DAI GuoZhong(戴国忠); LI Wei(李未)
In this paper, we present a programmable method of revising a finite clause set. We first present a procedure whose formal parameters are a consistent clause set Γ and a clause A and whose output is a set of minimal subsets of Γ which are inconsistent with A. The maximal consistent subsets can be generated from all minimal inconsistent subsets. We develop a prototype system based on the above procedure, and discuss the implementation of knowledge base maintenance. Finally, we compare the approach presented in this paper with other related approaches. The main characteristic of the approach is that it can be implemented by a computer program.
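The procedure described above — given a consistent clause set Γ and a clause A, output the minimal subsets of Γ inconsistent with A — can be sketched for small propositional clause sets with a brute-force satisfiability check. This is an illustrative reconstruction under stated assumptions, not the paper's own procedure; all names are invented:

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Brute-force satisfiability of propositional clauses.
    A clause is a frozenset of nonzero ints; -v is the negation of variable v."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product((False, True), repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def minimal_inconsistent_subsets(gamma, a):
    """All minimal subsets S of the consistent clause set gamma
    such that S together with clause a is inconsistent."""
    gamma, found = list(gamma), []
    for k in range(1, len(gamma) + 1):
        for subset in combinations(gamma, k):
            s = set(subset)
            if any(m <= s for m in found):   # a smaller witness already covers s
                continue
            if not satisfiable(list(s) + [a]):
                found.append(s)
    return found
```

The maximal consistent subsets mentioned in the abstract can then be generated by removing one clause from each minimal inconsistent subset.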
Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.
Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by alteration of approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed for generation of an army of entrepreneurs, which initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.
Preece, Alun; Gomez, Mario; de Mel, Geeth; Vasconcelos, Wamberto; Sleeman, Derek; Colley, Stuart; Pearson, Gavin; Pham, Tien; La Porta, Thomas
Making decisions on how best to utilise limited intelligence, surveillance and reconnaissance (ISR) resources is a key issue in mission planning. This requires judgements about which kinds of available sensors are more or less appropriate for specific ISR tasks in a mission. A methodological approach to addressing this kind of decision problem in the military context is the Missions and Means Framework (MMF), which provides a structured way to analyse a mission in terms of tasks, and assess the effectiveness of various means for accomplishing those tasks. Moreover, the problem can be defined as knowledge-based matchmaking: matching the ISR requirements of tasks to the ISR-providing capabilities of available sensors. In this paper we show how the MMF can be represented formally as an ontology (that is, a specification of a conceptualisation); we also represent knowledge about ISR requirements and sensors, and then use automated reasoning to solve the matchmaking problem. We adopt the Semantic Web approach and the Web Ontology Language (OWL), allowing us to import elements of existing sensor knowledge bases. Our core ontologies use the description logic subset of OWL, providing efficient reasoning. We describe a prototype tool as a proof-of-concept for our approach. We discuss the various kinds of possible sensor-mission matches, both exact and inexact, and how the tool helps mission planners consider alternative choices of sensors.
Sirola, Miki; Lampi, Golan; Parviainen, Jukka
Knowledge-based decision support systems of today are the result of many decades of development, over which more and more methodologies and application areas have become involved. In this paper, neural methods are combined with knowledge-based methodologies: the Self-Organizing Map (SOM) is used together with rule-based reasoning and realized in a prototype of a decision support system. This system, which can be used e.g. in fault diagnosis, is based on an earlier study including compatibility analysis. A Matlab-based tool is capable of performing fault detection and identification tasks. We show with an example how SOM analysis can help decision making in a computerized decision support system. The quantisation error between normal data and fault data is one important methodological tool in this analysis. This kind of decision making is needed, for instance, in the control room for state monitoring of a safety-critical industrial process. A scenario involving a leak in the primary circuit of a BWR nuclear power plant is also briefly demonstrated. (Author)
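The quantisation error mentioned above — the distance from a sample to its best-matching unit in the SOM codebook — is straightforward to sketch. A minimal illustration in Python (the codebook is assumed to be pre-trained on normal data; the function names and the fixed threshold are assumptions, not the authors' tool):

```python
import math

def quantization_error(sample, codebook):
    """Distance from a sample to its best-matching unit in the SOM codebook."""
    return min(math.dist(sample, unit) for unit in codebook)

def is_anomalous(sample, codebook, threshold=1.0):
    """Flag a sample whose quantisation error exceeds a fault threshold,
    the basic fault-detection idea behind SOM-based state monitoring."""
    return quantization_error(sample, codebook) > threshold
```

A sample close to a codebook unit yields a small error and is treated as normal; a sample far from every unit exceeds the threshold and is flagged for the operator.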
Friedrichsen, Patricia J.; Abell, Sandra K.; Pareja, Enrique M.; Brown, Patrick L.; Lankford, Deanna M.; Volkmann, Mark J.
Alternative certification programs (ACPs) have been proposed as a viable way to address teacher shortages, yet we know little about how teacher knowledge develops within such programs. The purpose of this study was to investigate prior knowledge for teaching among students entering an ACP, comparing individuals with teaching experience to those…
Mnguni, Lindelani; Abrie, Mia
HIV/AIDS education should empower students to create knowledge using everyday life experiences. Such knowledge should then be used to construe experience and resolve social problems such as risk behaviour that leads to infection. In South Africa, attempts to reduce the spread of HIV include incorporating HIV/AIDS education in the biology…
The tempo and mode of human knowledge expansion is an enduring yet poorly understood topic. Through a temporal network analysis of three decades of discoveries of protein interactions and genetic interactions in baker's yeast, we show that the growth of scientific knowledge is exponential over time and that important subjects tend to be studied earlier. However, expansions of different domains of knowledge are highly heterogeneous and episodic such that the temporal turnover of knowledge hubs is much greater than expected by chance. Familiar subjects are preferentially studied over new subjects, leading to a reduced pace of innovation. While research is increasingly done in teams, the number of discoveries per researcher is greater in smaller teams. These findings reveal collective human behaviors in scientific research and help design better strategies in future knowledge exploration.
Ricks, Wendell R.
The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described; the flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. Flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; these tests verified their implementation and integration and validated the software engineering advantages of the KBS approach in an operational environment.
Wei, Ran; Zhang, Xuehu; Ding, Linfang; Ma, Haoming; Li, Qi
Chinese address geocoding is a difficult problem due to intrinsic complexities in Chinese address systems and a lack of standards in address assignment and usage. In order to improve existing address geocoding algorithms, a spatial knowledge-based agent prototype aimed at validating address geocoding results was built to determine spatial accuracy as well as matching confidence. A portion of the human knowledge used to judge the spatial closeness of two addresses is represented via first-order logic, and the corresponding algorithms are implemented in the Prolog language. Preliminary tests conducted on address-matching results in the Beijing area showed that the prototype can successfully assess the spatial closeness between the matching address and the query address with 97% accuracy.
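Rules for judging the closeness of two addresses are, at their core, comparisons along the administrative hierarchy (province, city, district, street, house number). A hypothetical sketch in Python rather than the paper's Prolog (the component names, parsed-address representation, and scoring scheme are all assumptions for illustration):

```python
def address_closeness(a, b):
    """Score the spatial closeness of two parsed addresses by the depth of
    their shared administrative hierarchy; 1.0 means identical down to the
    house number, 0.0 means no shared prefix. (Illustrative rule set only.)"""
    levels = ["province", "city", "district", "street", "number"]
    depth = 0
    for level in levels:
        # a Horn-clause style rule: a level matches only if all coarser levels did
        if a.get(level) and a.get(level) == b.get(level):
            depth += 1
        else:
            break
    return depth / len(levels)
```

Two addresses agreeing on province, city and district but diverging at street level would score 0.6, letting a validation agent rank candidate geocoding matches by plausibility.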
The project of modernising Western herbal medicine in order to allow it to be accepted by the public and to contribute to contemporary healthcare is now over two decades old. One aspect of this project involves changes to the ways knowledge about medicinal plants is presented. This paper contrasts the models of Evidence-Based Medicine (EBM) and Traditional Knowledge (TK) to illuminate some of the complexities which have arisen consequent to these changes, particularly with regard to the concept of vitalism, the retention or rejection of which may have broad implications for the clinical practice of herbal medicine. Illustrations from two herbals (central texts on the medicinal use of plants) demonstrate the differences between these frameworks in regard to how herbs are understood. Further, a review of articles on herbal therapeutics published in the Australian Journal of Herbal Medicine indicates that practitioners are moving away from TK and towards the use of EBM in their clinical discussions.
Teresita M. Hogan
Introduction: Emergency care of older adults requires specialized knowledge of their unique physiology, atypical presentations, and care transitions. Older adults often require distinctive assessment, treatment and disposition, and emergency medicine (EM) residents should develop expertise and efficiency in geriatric care. Older adults represent over 25% of most emergency department (ED) volumes, yet many EM residencies lack curricula or assessment tools for competent geriatric care, and fully educating residents in emergency geriatric care can demand large amounts of limited conference time. The Geriatric Emergency Medicine Competencies (GEMC) are high-impact geriatric topics developed to help residencies efficiently and effectively meet this training demand. This study examines whether a 2-hour didactic intervention can significantly improve resident knowledge in 7 key domains as identified by the GEMC across multiple programs. Methods: A validated 29-question didactic test was administered at six EM residencies before and after a GEMC-focused lecture delivered in summer and fall of 2009. We analyzed scores as individual questions and in defined topic domains using a paired student t test. Results: A total of 301 exams were administered: 86 to PGY1, 88 to PGY2, 86 to PGY3, and 41 to PGY4 residents. The testing of didactic knowledge before and after the GEMC educational intervention had high internal reliability (87.9%). The intervention significantly improved scores in all 7 GEMC domains (improvement 13.5% to 34.6%; p<0.001). Across all questions, the improvement was 23% (37.8% pre, 60.8% post; p<0.001). A graded increase in geriatric knowledge occurred by PGY year, with the greatest post-intervention improvement seen at the PGY3 level (PGY1 19.1% versus PGY3 27.1%). Conclusion: A brief GEMC intervention had a significant impact on EM resident knowledge of critical geriatric topics. Lectures based on the GEMC can be a high-yield tool to enhance resident knowledge of
Menking, D.E.; Goode, M.T. [Army Edgewood Research, Development and Engineering Center, Aberdeen Proving Ground, MD (United States)
Fiber optic evanescent fluorosensors are under investigation in our laboratory for the study of drug-receptor interactions for detection of threat agents and antibody-antigen interactions for detection of biological toxins. In a direct competition assay, antibodies against Cholera toxin, Staphylococcus Enterotoxin B or ricin were noncovalently immobilized on quartz fibers and probed with fluorescein isothiocyanate (FITC) - labeled toxins. In the indirect competition assay, Cholera toxin or Botulinum toxoid A was immobilized onto the fiber, followed by incubation in an antiserum or partially purified anti-toxin IgG. These were then probed with FITC-anti-IgG antibodies. Unlabeled toxins competed with labeled toxins or anti-toxin IgG in a dose dependent manner and the detection of the toxins was in the nanomolar range.
This paper shows that the influence of knowledge on new forms of work organisation can be described as a set of mutual relationships: different changes in work organisation also strongly influence the increasing importance of the knowledge of different individual and collective actors in working situations. We then characterize a basic formal system, an Extended Fuzzy Logic System (EFLS) with temporal attributes, to conceptualize future DKMSs based on imprecise human knowledge for distributed just-in-time decisions. Approximate reasoning is perceived as the derivation of new formulas with corresponding temporal attributes, within a fuzzy theory defined by the fuzzy set of special axioms. In a management application, the reasoning is evolutionary because unexpected events may change the state of the DKMS. In such situations it is necessary to elaborate mechanisms to maintain the coherence of the obtained conclusions, to determine their degree of reliability and the time domain for which they hold. These last aspects stand as possible further directions of development at a basic logic level for future technologies that must automate organizational knowledge processes.
Robeva, Raina; Davies, Robin; Hodge, Terrell; Enyedi, Alexander
We describe an ongoing collaborative curriculum materials development project between Sweet Briar College and Western Michigan University, with support from the National Science Foundation. We present a collection of modules under development that can be used in existing mathematics and biology courses, and we address a critical national need to introduce students to mathematical methods beyond the interface of biology with calculus. Based on ongoing research, and designed to use the project-...
Shi, Yongchang; Gao, Wen; Hu, Liang; Fu, Zetian
To improve the availability and reusability of knowledge in the fish disease diagnosis (FDD) domain and to facilitate knowledge acquisition, an ontology model of FDD knowledge was developed, based on OWL, according to the FDD knowledge model. It includes the terminology of FDD knowledge and the class hierarchies of its terms.
Muench, David; Hilsenbeck, Barbara; Kieritz, Hilke; Becker, Stefan; Grosselfinger, Ann-Kristin; Huebner, Wolfgang; Arens, Michael
We are living in a world dependent on sophisticated technical infrastructure. Malicious manipulation of such critical infrastructure poses an enormous threat for all its users. Thus, running a critical infrastructure needs special attention to log planned maintenance and to detect suspicious events. Towards this end, we present a knowledge-based surveillance approach capable of logging visually observable events in such an environment. The video surveillance modules are based on appearance-based person detection, which is further used to modulate the outcome of generic processing steps such as change detection or skin detection. A relation between the expected scene behavior and the underlying basic video surveillance modules is established. It will be shown that this combination already provides sufficient expressiveness to describe various everyday situations in indoor video surveillance. The whole approach is qualitatively and quantitatively evaluated on a prototypical scenario in a server room.
Tang, Suisheng; Zhang, Zhuo; Tan, Sin Lam;
Estrogen has a profound impact on human physiology, affecting the transcription of numerous genes. To decipher the functional characteristics of estrogen responsive genes, we developed the KnowledgeBase for Estrogen Responsive Genes (KBERG). Genes in KBERG were derived from the Estrogen Responsive Gene Database (ERGDB) and were analyzed from multiple aspects. We explored the possible transcription regulation mechanism by capturing highly conserved promoter motifs across orthologous genes, using promoter regions that cover the range of [-1200, +500] relative to the transcription start sites. The motif detection is based on ab initio discovery of common cis-elements from the orthologous gene clusters from human, mouse and rat, thus reflecting a degree of promoter sequence preservation during evolution. The identified motifs are linked to transcription factor binding sites based on the TRANSFAC database. In addition…
Pinnock, Hilary; Holmes, Steve; Levy, Mark L; McArthur, Ruth; Small, Iain
A web-based questionnaire, comprising 11 multiple choice questions, tested the knowledge of visitors to the General Practice Airways Group (GPIAG) online summary of the British Asthma guideline. On average, the 413 respondents answered less than half the questions correctly. GP scores were significantly lower than those of practice nurses. Improving clinicians' knowledge of asthma is a prerequisite for improving management.
Rybnik, Mariusz; Jastrzebska, Agnieszka
The paper is focused on automated knowledge discovery in musical pieces, based on transformations of digital musical notation. Usually a single musical piece is analyzed to discover its structure as well as the traits of separate voices. Melody and rhythm are processed with the use of three proposed operators that serve as meta-data. In this work we focus on melody, so the processed data is labeled using fuzzy labels created for detecting various voice characteristics. A comparative analysis of two musical pieces may be performed as well, comparing them in terms of various rhythmic or melodic traits (as a whole or with voice separation).
Kothari, Cartik R; Payne, Philip R O
In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.
Agarwal, A.; Jairam, B.N.; Emrich, M.L.; Murthy, N.
Management aspects of software development have received little research interest. The SOFTMAN system addresses the automation of this feature of the SDLC. It is a knowledge-based system which tracks the health of a software development effort. By comparing user metrics to past environment standards, anomalies in the coding stage are detected and suggestions for solving them are offered. In addition, SOFTMAN can be used to tutor new personnel, perform what-if analysis, and build a corporate memory regarding management decisions. 15 refs., 2 figs.
Kudenko, Daniel; Grzes, Marek
Experts have developed heuristics that help them in planning and scheduling resources in their workplace. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping, when used in its basic form. We showed that MDP-based reward shaping may fail, and successful experiments with STRIPS-based shaping suggest modifications which can overcome the encountered problems. The STRIPS-based method we propose allows the same domain knowledge to be expressed in a different way, so the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. In case STRIPS knowledge is not available, we propose a second technique that shapes the reward with hierarchical tile coding: where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
Based on Internet technology and artificial intelligence (AI) technology, this paper presents a dispersed press process knowledge bases based multi-reasoning press process decision system (DKB-MRPPD). The dispersed press process knowledge bases have been organized into case bases and rule bases, which may be located at different enterprises, and are employed to plan press processes by a multi-reasoning engine made up of ART1, case-based reasoning and a rule-based reasoning net. The architecture model of DK...
Kampf, Constance; Kommers, Piet
This call for papers invites papers focused on theoretical frameworks or empirical research which highlight the cultural and/or rhetorical aspects of communicating knowledge in web based communities, and we are looking for work that brings together methods and perspectives across disciplines. Cultural and rhetorical bases for communicating knowledge in web based communities: how can we extend learner-centred theories for educational technology to include, for instance, the cultural and rhetorical backgrounds which influence participants in online communities as they engage in knowledge communication processes? To begin to answer this question, we are looking for papers which engage concepts such as: communities of practice (Wenger 1998); the emerging field of knowledge communication; the connections between communicating knowledge and discourse structures; the cultural situatedness of communication…
A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…
Constructing a rational knowledge network is an effective means of organizing cooperative innovation. However, collaborative innovation organization systems are relatively complicated, and because knowledge itself is latent and unpredictable, the knowledge network is neither comprehensive nor accurate. This paper analyzes in depth the knowledge network of organizational cooperative innovation and builds a system dynamics model of the knowledge network of collaborative innovation, using system dynamics modelling and simulation. From this model, we can clearly identify the key factors in the system and quantitatively reveal knowledge sharing as the most important rule of collaborative innovation in a knowledge organization network.
Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul
monitoring and analysis tools for a wide range of operations has made their selection a difficult, time-consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools, satisfying the process and user constraints. A knowledge base consisting of process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one… procedures has been developed to retrieve the data/information stored in the knowledge base…
Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen
It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied “basic networks”. A biomedical network is characterized as a spectrum-like vector called “network fingerprint”, which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks. PMID:26307246
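A network fingerprint in the sense above is simply a vector of similarities between one network and a set of well-studied reference networks. A toy sketch in Python, using cosine similarity of degree histograms as the similarity measure — an assumed, much simpler stand-in for the paper's method, with all names invented for illustration:

```python
from collections import Counter
from math import sqrt

def degree_profile(edges, bins=5):
    """Normalized histogram of node degrees, capped at `bins`."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0.0] * bins
    for d in deg.values():
        hist[min(d, bins) - 1] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine(p, q):
    num = sum(a * b for a, b in zip(p, q))
    den = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return num / den if den else 0.0

def fingerprint(edges, basic_networks):
    """Spectrum-like vector of similarities to named basic networks."""
    prof = degree_profile(edges)
    return {name: round(cosine(prof, degree_profile(e)), 3)
            for name, e in basic_networks.items()}
```

Comparing or clustering fingerprints (rather than raw graphs) is what makes large-scale network-to-network comparison tractable in this style of analysis.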
Halpern, Joseph Y
Consider a distributed system N in which each agent has an input value and each communication link has a weight. Given a global function, that is, a function f whose value depends on the whole network, the goal is for every agent to eventually compute the value f(N). We call this problem global function computation. Various solutions for instances of this problem, such as Boolean function computation, leader election, (minimum) spanning tree construction, and network determination, have been proposed, each under particular assumptions about what processors know about the system and how this knowledge can be acquired. We give a necessary and sufficient condition for the problem to be solvable that generalizes a number of well-known results. We then provide a knowledge-based (kb) program (like those of Fagin, Halpern, Moses, and Vardi) that solves global function computation whenever possible. Finally, we improve the message overhead inherent in our initial kb program by giving a counterfactual belief-based pro...
As in the traditional enterprise, the performance of enterprises in the knowledge-based society is expressed through the same well-known financial indicators: return on equity, profit margin, return on assets, gross margin, asset turnover, inventory turnover, collection period, days' sales in cash, payable period, fixed-asset turnover, balance sheet ratios, coverage ratios, market value leverage ratios, liquidity ratios, return on invested capital and many others. The differences, however, lie in the way this performance is achieved in the enterprises. The current knowledge-based society promotes methods and models of rational management that lead enterprises to achieve performance. As a first step, financial documents such as the income statement, balance sheet and schedules to a balance sheet have started to include references to brain capital, which is considered the key to success in business. In this paper I intend to present the effects on enterprise financial performance of the main components of brain capital: human capital, characterised by the employees' competences and skills; organizational capital, which defines the internal structures of the enterprises, including the informatics structure; and social capital, related to the enterprise's relations with third parties (investors, banks, customers, suppliers etc.). Brain capital must not be regarded as a passing vogue but as a necessity, to be considered and evaluated so that the knowledge/information dimension is added to the old economic-financial rules used in decision making.
Fei Gao; Achang Ru; Jun Wang; Shiyi Mao
When the classical constant false-alarm rate (CFAR) combined with the fuzzy C-means (FCM) algorithm is applied to target detection in synthetic aperture radar (SAR) images with complex background, CFAR requires block-by-block estimation of clutter models and FCM clustering converges to a local optimum. To address these problems, this paper proposes a new detection algorithm: knowledge-based detection combined with an improved genetic algorithm-fuzzy C-means (GA-FCM) algorithm. Firstly, the algorithm takes the target region's maximum and average intensity, area, length of the long axis and long-to-short axis ratio of the external ellipse as factors which influence the target appearing probability. The knowledge-based detection algorithm can produce pre-processing results without the need to estimate clutter models as CFAR does. Afterwards, the GA-FCM algorithm is improved to cluster the pre-processing results. It has the advantages of incorporating the global optimizing ability of GA and the local optimizing ability of FCM, which will further eliminate false alarms and yield better results. The effectiveness of the proposed technique is experimentally validated with real SAR images.
This paper uses descriptive and comparative approaches and draws on the OECD (1996) definition of the knowledge-based economy, the World Bank Knowledge Index and Knowledge Economy Index, and other indicators to examine progress and challenges in the transition to knowledge-based economies in the Arab Gulf countries.
Introduction: Practical strategies are needed to translate research knowledge between researchers and users into action. For effective translation to occur, researchers and users should partner during the research process, recognizing the impact that knowledge, when translated into practice, will have on those most affected by that research.…
This book presents a sample of research on knowledge-based systems in biomedicine and computational life science. The contributions include: · personalized stress diagnosis system · image analysis system for breast cancer diagnosis · analysis of neuronal cell images · structure prediction of protein · relationship between two mental disorders · detection of cardiac abnormalities · holistic medicine based treatment · analysis of life-science data
A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
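The on/off scenario described above can be sketched as a toy consistency screen over rule consequences; the rule names, effect labels, and contradiction list are invented for illustration and are not from the actual TMS:

```python
from itertools import combinations

def combo_consistent(combo, rules, contradictions):
    """True if firing all rules in `combo` together yields no
    contradictory pair of effects (e.g. the device both on and off)."""
    effects = set().union(*(rules[r] for r in combo))
    return not any({a, b} <= effects for a, b in contradictions)

# hypothetical rule -> consequence map and contradiction list
rules = {
    "power_restored": {"device_on"},
    "overheat_guard": {"device_off"},
    "door_open":      {"alarm_on"},
}
contra = [("device_on", "device_off")]

# keep only rule pairs whose combined consequences stay consistent,
# pruning inconsistent combinations from the scenario search space
safe_pairs = [c for c in combinations(rules, 2)
              if combo_consistent(c, rules, contra)]
```

Excluding inconsistent combinations up front is what shrinks the knowledge base the inference engine must search when data are missing.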
Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)
Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.
Taeger, Kelli Rae
Dissection has always played a crucial role in biology and anatomy courses at all levels of education. However, in recent years, ethical concerns, as well as improved technology, have brought to the forefront the issue of whether virtual dissection is as effective as, or more effective than, traditional dissection. Most prior research indicated the two methods produced equal results. However, none of those studies examined retention of information past the initial test of knowledge. Two groups of college students currently enrolled in an introductory-level college biology course were given one hour to complete a frog dissection. One group performed a traditional frog dissection, making cuts in an actual preserved frog specimen with scalpels and scissors. The other group performed a virtual frog dissection, using "The Digital Frog 2" software. Immediately after the dissections were completed, each group was given an examination consisting of questions on actual specimens, pictures generated from the computer software, and illustrations that neither group had seen. Two weeks later, unannounced, the groups took the same exam in order to test retention. The traditional dissection group scored significantly higher on two of the three sections, as well as on the total score, on the initial exam. However, with the exception of specimen questions (on which the traditional group retained significantly more information), there was no significant difference in retention from exam 1 to exam 2 between the two groups. These results, along with the majority of prior studies, show that the two methods produce, for the most part, the same end results. Therefore, the decision of which method to employ should be based on the goals and preferences of the instructor(s) and the department. If that department's goals include being at the forefront of new technology, improving time management, increasing the student-to-teacher ratio for economic reasons, and/or addressing ethical issues, then
SONG Hui; MA Fan-yuan; LIU Xiao-qiang
The Hidden Web provides a great amount of domain-specific data for constructing knowledge services. Most previous knowledge-extraction research ignores the valuable data hidden in Web databases, and related work does not address how to make the extracted information available to knowledge systems. This paper describes a novel approach to building a domain-specific knowledge service with data retrieved from the Hidden Web. An ontology serves to model the domain knowledge. Query forms of different Web sites are translated into a machine-understandable format with defined knowledge concepts, so that they can be accessed automatically. Knowledge data are also extracted from Web pages and organized as ontology-format knowledge. The experiment proves that the algorithm achieves high accuracy and that the system greatly facilitates the construction of knowledge services.
Yeong, Foong May
A surge in the amount of information in the discipline of Cell Biology presents a problem to the teaching of undergraduates under time constraints. In most textbooks and during lectures, students in Singapore are often taught in a dogmatic manner where concepts and ideas are expounded to them. The students in turn passively receive the materials…
Snakes are controversial animals emblazoned by legends, but also endangered as a result of human prejudice and fear. The author investigated gender and age-related differences in attitudes to and knowledge of snakes comparing samples of school children and pre-service teachers. It was found that although pre-service teachers had better knowledge…
JIANG; Tao; LI; Qing-fen; LI; Ming; FU; Wei
A knowledge-based system for structural component design based on fracture mechanics is developed in this paper. The system consists of several functional parts: a general inference engine, a set of knowledge bases and databases, an interpretation engine, a base administration system, and the interface. It can simulate a human expert to produce analysis and design schemes, mainly for four kinds of typical structural components widely used in the shipbuilding industry: pressure vessels, huge rotating constructions, pump rods, and welded structures. It is an open system which may be broadened and perfected to cover a wider range of engineering applications through the modification and enlargement of the knowledge bases and databases. It has a natural and friendly interface that is easy to operate. An on-line help service is also provided.
Höfler, Veit; Wessollek, Christine; Karrasch, Pierre
Currently, digital elevation models are mainly used in archaeological studies, especially in the form of shaded reliefs, for the prospection of archaeological sites. Hesse (2010) provides a supporting software tool for the determination of local relief models during prospection using LiDAR scans; the search for relicts from WW2 is also a focus of his research. In James et al. (2006), determined contour lines were used to reconstruct the locations of archaeological artefacts such as buildings. This study goes much further and presents an innovative workflow for determining historical high-resolution terrain surfaces using recent high-resolution terrain models and sedimentological expert knowledge. Based on archaeological field studies (Franconian Saale near Bad Neustadt in Germany), the sedimentological analyses show that archaeologically interesting horizons and geomorphological expert knowledge, in combination with particle-size analyses (Koehn, DIN ISO 11277), are useful components for reconstructing surfaces of the early Middle Ages. Furthermore, the paper traces how additional information (extracted from a recent digital terrain model) can be used to support the process of determining historical surfaces. Conceptually, this research is based on the methodology of geomorphometry and geostatistics. The basic idea is that the working procedure is driven by the different input data: one strand tracks the quantitative data and the other processes the qualitative data. The quantitative data thus become available for further processing and are later combined with the qualitative data to convert them into historical heights. In the final stage of the workflow, all gathered information is stored in a large data matrix for spatial interpolation using the geostatistical method of kriging. Besides the historical surface, the algorithm also provides a first estimate of the accuracy of the modelling. The presented workflow is characterized by a high
Allen, James G.; Sikora, Scott E.
The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.
K. Naresh kumar
Full Text Available In this paper, we propose an efficient method for knowledge management in edaphology to assist edaphologists and those involved in agriculture in a big way. The proposed method consists of two sections: the first builds the knowledge base using XML, and the latter deals with information retrieval by fuzzy search. Initially, the relational database is converted to an XML database. The paper discusses two algorithms: in one, soil characteristics are input to obtain the list of suitable plants; in the other, plant names are input to obtain the soil characteristics suited to each plant. While retrieving the query result, crisp numerical values are converted to fuzzy values using the triangular fuzzy membership function and matched against those in the database. Those which satisfy the match are added to the result list, and the frequency is then computed to rank the result list and obtain the final sorted list. The performance metrics used to evaluate the method and compare it to the baseline are the number of plants retrieved, ranking efficiency, computation time, and memory usage. The results prove the validity of the method: it obtained an average computation time of 0.102 seconds and average memory usage of 2486 KB, both far better than the previous method's results.
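The triangular fuzzy membership function mentioned above has a standard closed form; a minimal sketch follows, where the "neutral pH" fuzzy set and its parameter values are invented examples, not values from the paper:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), rising
    linearly to 1 at the peak b, then falling back to 0."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# fuzzify a crisp soil-pH reading against a hypothetical "neutral pH" set
degree = triangular(6.8, a=6.0, b=7.0, c=8.0)  # -> 0.8
```

In the retrieval step, such membership degrees would replace exact equality when matching a crisp query value against stored soil characteristics.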
Shun-Liang CAO; Lei QIN; Wei-Zhong HE; Yang ZHONG; Yang-Yong ZHU; Yi-Xue LI
Semantic search is a key issue in the integration of heterogeneous biological databases. In this paper, we present a methodology for implementing semantic search in BioDW, an integrated biological data warehouse. Two tables are presented: the DB2GO table, to correlate Gene Ontology (GO) annotated entries from BioDW data sources with GO, and the semantic similarity table, to record similarity scores derived from any pair of GO terms. Based on the two tables, multifarious ways of semantic search are provided, and the corresponding entries in heterogeneous biological databases can be expediently searched in semantic terms.
LUAN ShangMin; DAI GuoZhong
One of the important topics in knowledge base revision is to introduce an efficient implementation algorithm. Algebraic approaches have good characteristics and implementation methods; they may be a way to solve the problem. An algebraic approach to revising propositional rule-based knowledge bases is presented in this paper. A way is first introduced to transform a propositional rule-based knowledge base into a Petri net: the knowledge base is represented by a Petri net, and facts are represented by the initial marking. Thus, the consistency check of a knowledge base is equivalent to the reachability problem of Petri nets. The reachability of Petri nets can be decided by whether the state equation has a solution; hence the consistency check can also be implemented by the algebraic approach. Furthermore, algorithms are introduced to revise a propositional rule-based knowledge base, as well as extended logic programs. Compared with related works, the algorithms presented in the paper are efficient, and their time complexities are polynomial.
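The transformation can be illustrated with a small sketch: each rule becomes a transition that consumes tokens from its premise places and produces a token in its conclusion place, so consistency questions reduce to marking reachability. The sketch below decides reachability by explicit breadth-first search rather than the paper's algebraic state-equation method, and all names are illustrative:

```python
from collections import deque

def reachable(m0, target, transitions, limit=10000):
    """BFS over markings of a Petri net encoding a rule base.
    Markings are frozensets of (place, token_count) pairs;
    transitions are (consume, produce) dicts over place names."""
    seen, queue = {m0}, deque([m0])
    while queue and len(seen) < limit:
        m = queue.popleft()
        if m == target:
            return True
        md = dict(m)
        for consume, produce in transitions:
            # a transition is enabled if every input place has enough tokens
            if all(md.get(p, 0) >= n for p, n in consume.items()):
                nm = dict(md)
                for p, n in consume.items():
                    nm[p] -= n
                for p, n in produce.items():
                    nm[p] = nm.get(p, 0) + n
                frozen = frozenset((p, n) for p, n in nm.items() if n)
                if frozen not in seen:
                    seen.add(frozen)
                    queue.append(frozen)
    return target in seen

# rules a -> b and b -> c, with fact `a` as the initial marking
rules = [({"a": 1}, {"b": 1}), ({"b": 1}, {"c": 1})]
start = frozenset({("a", 1)})
```

In the paper's setting, one would instead test whether the net's state equation admits a solution, which is what makes the check algebraic and polynomial.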
Aronson Alan R
Full Text Available Abstract Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes it infeasible to produce training data covering the whole domain. Methods We present research on existing WSD approaches based on knowledge bases, which complements the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word with the candidate senses, based on a representation built out of the definitions, synonyms, and related terms. The second approach collects training data for each of the candidate senses to perform WSD, based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations; then a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD; it ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD: the context of the ambiguous word and the semantic types of the candidate concepts are mapped to Journal Descriptors, and these mappings are compared to decide among the candidate concepts. Results are provided estimating the accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results than the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well
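The first, overlap-based approach can be sketched as a simple Lesk-style comparison between the context and each candidate's gloss; the glosses below are invented stand-ins for UMLS definitions, not actual Metathesaurus content:

```python
def lesk_overlap(context, sense_glosses):
    """Choose the sense whose gloss shares the most words with the
    context of the ambiguous term (simplified overlap approach)."""
    ctx = set(context.lower().split())
    best, best_score = None, -1
    for sense, gloss in sense_glosses.items():
        score = len(ctx & set(gloss.lower().split()))
        if score > best_score:
            best, best_score = sense, score
    return best

# invented stand-ins for UMLS definitions of the ambiguous term "cold"
glosses = {
    "common_cold":      "viral infection of the nose and throat with sneezing",
    "cold_temperature": "low temperature absence of heat chilly weather",
}
sense = lesk_overlap("patient presented with sneezing and a sore throat", glosses)
```

A production system would of course weight terms, use stemming, and draw the gloss bag from definitions, synonyms, and related terms as the abstract describes.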
Del Fiol, Guilherme; Cimino, James J; Maviglia, Saverio M; Strasberg, Howard R; Jackson, Brian R; Hulse, Nathan C
Online health knowledge resources can be integrated into electronic health record systems using decision support tools known as “infobuttons.” In this study we describe a knowledge management method based on the analysis of knowledge resource use via infobuttons in multiple institutions. Methods: We conducted a two-phase analysis of laboratory test infobutton sessions at three healthcare institutions accessing two knowledge resources. The primary study measure was session coverage, i.e. the rate of infobutton sessions in which resources retrieved relevant content. Results: In Phase One, resources covered 78.5% of the study sessions. In addition, a subset of 38 noncovered tests that most frequently raised questions was identified. In Phase Two, content development guided by the outcomes of Phase One resulted in a 4% average coverage increase. Conclusion: The described method is a valuable approach to large-scale knowledge management in rapidly changing domains. PMID:21346957
Theoretical part: Basic terms of knowledge management, the knowledge worker, the knowledge creation and conversion process, prerequisites and benefits of knowledge management. Knowledge management and its connection to organizational culture and structure, result measurements of knowledge management, the learning organization and its connection to knowledge management. Tacit knowledge management tools -- stories -- types, how to create them, practical use, communities, coaching. Value Based Organization. Pr...
Spraggins Thomas A
Corsale, Kathleen; Gitomer, Drew
Developmental and individual differences in mathematical aptitude were investigated as a function of knowledge structure and processing variables. Results indicated the relative importance of knowledge structure and strategy skills in aptitude test performance. Protocol data further elaborated the interrelationships between knowledge and process…
Do, Quang Xuan
In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…
Theofilatos, Konstantinos A.
Proteins and their interactions are considered to play a significant role in many cellular processes. The identification of protein-protein interactions (PPIs) in humans is an open research area. Many databases have been developed that contain information about experimentally and computationally detected human PPIs as well as their corresponding annotation data. However, these databases contain many false positive interactions, are incomplete, and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://184.108.40.206:84/Default.aspx), a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, estimates a set of features of interest, and computes a confidence score for every candidate protein interaction using a modern computational hybrid methodology. © 2012 IFIP International Federation for Information Processing.
Hvam, Lars; Malis, Martin
How can complex product models be documented in a formalised way that considers both development and maintenance? The need for an effective documentation tool has emerged in order to document the development of product models, which have become more and more complex and comprehensive. This need has been addressed with the development of a Lotus Notes application that serves as a knowledge-based documentation tool for configuration projects. A prototype has been developed and tested empirically in an industrial case company, and it has proved to be a success.
Andreasen, Troels; Styltsvig, Henrik Bulskov; Jensen, Per Anker;
We describe a natural logic for computational reasoning with a regimented fragment of natural language. The natural logic comes with intuitive inference rules enabling deductions and with an internal graph representation facilitating conceptual path finding between pairs of terms as an approach to semantic querying. Our core natural logic proposal covers formal ontologies and generative extensions thereof. It further provides means of expressing general relationships between classes in an application. We discuss extensions of the core natural logic with various conservative as well as non-conservative constructs in order to approach scientific use of natural language. Finally, we outline a prototype system addressing life science for the natural logic knowledge base setup, which is under continuous development.
Oh, Hyo-Jung; Yun, Bo-Hyun
This paper presents a knowledge acquisition method using sentence topics for question answering. We define templates for information extraction from the Korean concept network semi-automatically. Moreover, we propose a two-phase information extraction model using hybrid machine learning methods such as maximum entropy and conditional random fields. In our experiments, we examined the role of sentence topics in the template-filling task for information extraction. Our experimental results show an improvement of 18% in F-score and a 434% improvement in training speed over the plain CRF-based method for the extraction task. In addition, our results show an improvement of 8% in F-score for the subsequent QA task.
Full Text Available In order to solve problems of the vehicle-oriented society such as traffic jams and traffic accidents, intelligent transportation systems (ITS) have been proposed and have become a research focus, with the purpose of giving people better and safer driving conditions and assistance. The core of an intelligent transportation system is vehicle recognition and detection, which is a prerequisite for other related problems. Many existing vehicle recognition algorithms target one specific viewing direction, mostly front/back and side views. To make the algorithm more robust, this paper proposes a vehicle recognition algorithm for oblique views while also covering front/back and side views. The algorithm is designed based on common knowledge about cars, such as shape and structure. Experimental results on many car images show that our method has fine accuracy in car recognition.
ZENG Chuan-hua; PEI Zheng; XU Yang
Making decisions about event series is part of our life, and discovering knowledge from these decisions is of great significance in the field of control and decision-making. The paper takes an event series as the exterior form of movements with dynamic attributes, and derives the Markov transition probability matrix to express those attributes statistically. First, the decision table is constructed according to the matrix. Then, by reducing attributes based on rough set theory, the decision table is reduced and the decision rules are acquired. Finally, we make the decision by defining a rule distance and taking the minimum rule distance as the decision principle. An example follows, which proves that the algorithm is feasible and effective for event-series decision-making.
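The first step, estimating the Markov transition probability matrix from an observed event series, can be sketched as follows; the event series is an invented example:

```python
from collections import Counter, defaultdict

def transition_matrix(events):
    """Estimate Markov transition probabilities from an event series;
    these statistics feed the decision table described above."""
    counts = defaultdict(Counter)
    # count each observed transition a -> b between consecutive events
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    # normalise each row into a probability distribution
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

P = transition_matrix(["up", "up", "down", "up", "down", "down", "up"])
```

Each row of the resulting matrix would then become attribute values in the decision table before the rough-set reduction step.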
Mawussi, Kwamiwi; 10.1016/j.cie.2011.02.016
Recent evolutions in the forging process induce more complex shapes in forging dies. These evolutions, combined with the High Speed Machining (HSM) of forging dies, lead to an important increase in machining preparation time. In this context, an original approach for generating the machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of a complex forging die into geometric features. Technological data and topological relations are aggregated to a geometric feature in order to create machining features. Technological data, such as material, surface roughness, and form tolerance, are defined during forging process and die design; these data are used to choose cutting tools and machining strategies. Topological relations define relative positions between the surfaces of the die CAD model. After machining feature identification, the cutting tools and machining strategies currently used in HSM of forging dies are associated with them in order to generate mac...
Alexandra Teodora RUGINOSU
Full Text Available Knowledge-based organizations mean continuous learning, performance, and networking. People's development depends on their lifelong learning. Mentoring combines individuals' need for development and performance with the organization's. Organizations nowadays face difficulties in recruiting and retaining qualified employees, and workforce migration is a phenomenon they have to fight constantly. Employees are faithful to companies that give them an environment suitable for development: supportive, safe, non-judgmental, and comfortable. Teamwork and trust in co-workers enable employees to show their true potential and experiment without fear. This kind of environment can be created through a mentoring program. This paper highlights the importance of mentoring in the management of knowledge-based organizations. Mentoring helps staff insertion, development, and succession planning; increases employee motivation and talent retention; and promotes organizational culture. This study presents the benefits and drawbacks that mentoring brings to organizations and employees.
Molinaro, Marco; Bandieramonte, Marilena; Becciani, Ugo; Brescia, Massimo; Cavuoti, Stefano; Costa, Alessandro; Di Giorgio, Anna M; Elia, Davide; Hajnal, Akos; Gabor, Hermann; Kacsuk, Peter; Liu, Scige J; Molinari, Sergio; Riccio, Giuseppe; Schisano, Eugenio; Sciacca, Eva; Smareglia, Riccardo; Vitello, Fabio
The VIALACTEA project has a work package dedicated to Tools and Infrastructure and, within it, a task for the Database and Virtual Observatory Infrastructure. This task aims at providing an infrastructure to store all the resources needed by the scientific work packages of the project. This infrastructure includes a combination of storage facilities, relational databases, and web services on top of them, and has taken, as a whole, the name VIALACTEA Knowledge Base (VLKB). This contribution illustrates the current status of the VLKB: it details the set of data resources put together, describes the database that allows data discovery through VO-inspired metadata maintenance, and illustrates the discovery, cutout, and access services built on top of the former two for users to exploit the data content.
Remtulla, Karim A.
The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the…
Chew, Peter A.
One of the challenges increasingly facing intelligence analysts, along with professionals in many other fields, is the vast amount of data which needs to be reviewed and converted into meaningful information, and ultimately into rational, wise decisions by policy makers. The advent of the world wide web (WWW) has magnified this challenge. A key hypothesis which has guided us is that threats come from ideas (or ideology), and ideas are almost always put into writing before the threats materialize. While in the past the 'writing' might have taken the form of pamphlets or books, today's medium of choice is the WWW, precisely because it is a decentralized, flexible, and low-cost method of reaching a wide audience. However, a factor which complicates matters for the analyst is that material published on the WWW may be in any of a large number of languages. In 'Identification of Threats Using Linguistics-Based Knowledge Extraction', we have sought to use Latent Semantic Analysis (LSA) and other similar text analysis techniques to map documents from the WWW, in whatever language they were originally written, to a common language-independent vector-based representation. This then opens up a number of possibilities. First, similar documents can be found across language boundaries. Secondly, a set of documents in multiple languages can be visualized in a graphical representation. These alone offer potentially useful tools and capabilities to the intelligence analyst whose knowledge of foreign languages may be limited. Finally, we can test the over-arching hypothesis--that ideology, and more specifically ideology which represents a threat, can be detected solely from the words which express the ideology--by using the vector-based representation of documents to predict additional features (such as the ideology) within a framework based on supervised learning. In this report, we present the results of a three-year project of the same name. We believe
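Once documents have been mapped into a common language-independent space (by LSA or a similar technique), finding similar documents across language boundaries reduces to cosine similarity between their vectors. A minimal sketch; the concept labels and weights below are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse feature vectors (dicts)."""
    num = sum(w * b.get(k, 0.0) for k, w in a.items())
    den = (math.sqrt(sum(w * w for w in a.values())) *
           math.sqrt(sum(w * w for w in b.values())))
    return num / den if den else 0.0

# documents already projected into a shared concept space (weights invented);
# the two similar documents could originate in different languages
doc_en    = {"militancy": 0.9, "trade": 0.1}
doc_ar    = {"militancy": 0.8, "trade": 0.2}
doc_other = {"sports": 1.0}
```

In the project described above, the vectors would come from an LSA decomposition of a multilingual corpus rather than hand-assigned weights, and the same representation feeds the supervised ideology classifier.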
Corpus-based analysis is adopted to study the acceptable noun-verb collocations of the word knowledge. Verbs like acquire, have, etc. are found to frequently collocate with knowledge, while the Chinese students' favourite patterns like learn knowledge and enlarge knowledge are not acceptable. The finding may encourage teachers to consider the pedagogical value of corpora in language teaching.
Jansen, Amanda; Bartell, Tonya; Berk, Dawn
In this article, we describe features of learning goals that enable indexing knowledge for teacher education. Learning goals are the key enabler for building a knowledge base for teacher education; they define what counts as essential knowledge for prospective teachers. We argue that 2 characteristics of learning goals support knowledge-building…
Hatak, Isabella; Roessl, Dietmar
This article discusses the challenges of knowledge management within intrafamily succession against the background of the knowledge-based view. As a knowledge transfer is crucial for a successful business continuation, factors that promote the interpersonal knowledge transfer are identified. Since t
Bates, Maxwell; Berliner, Aaron J; Lachoff, Joe; Jaschke, Paul R; Groban, Eli S
Wet Lab Accelerator (WLA) is a cloud-based tool that allows a scientist to conduct biology via robotic control without the need for any programming knowledge. A drag and drop interface provides a convenient and user-friendly method of generating biological protocols. Graphically developed protocols are turned into programmatic instruction lists required to conduct experiments at the cloud laboratory Transcriptic. Prior to the development of WLA, biologists were required to write in a programming language called "Autoprotocol" in order to work with Transcriptic. WLA relies on a new abstraction layer we call "Omniprotocol" to convert the graphical experimental description into lower level Autoprotocol language, which then directs robots at Transcriptic. While WLA has only been tested at Transcriptic, the conversion of graphically laid out experimental steps into Autoprotocol is generic, allowing extension of WLA into other cloud laboratories in the future. WLA hopes to democratize biology by bringing automation to general biologists.
Remus Mircea Sabau
Full Text Available The improvement of management in Romania's public educational institutions is an undeniable priority for every government. The necessity of changes in institutional management related to the administrative system and, mainly, to the management of intellectual capital defines the utility and efficiency of meeting the needs of the respective educational environment, and sets as one of the main recently emerged problems the elaboration of structural-organisational and functional-operational measures stimulating the modernization of management at the educational level, including in higher education institutions, with particular reference to intellectual capital. In the national literature, these problems are not examined in depth scientifically. In my opinion, there is at present no well-defined framework in Romania for the management of intellectual capital in educational institutions. That is why I considered this not only a topical subject but also a central one. I am convinced that a more thorough examination of intellectual capital will be able to contribute, over the medium and long term, to administrative growth and development. Intellectual capital has a key role in the development of international relations and triggers, in my opinion, radical structural changes that are very important for increasing the value of educational institutions. The ability to create, use, and increase the value of intellectual capital is, in my opinion, the foundation on which the public management of a country's educational institutions is based, along with the welfare and quality of life of its citizens. The valorisation of knowledge is a long process which does not offer rapid results, but the effects of promoting and propagating knowledge are, firstly, qualitative and structural, and can lead to beneficial long-term effects. In my opinion
Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine
The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
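The millisecond retrieval described above can be sketched as a lookup over precomputed protein-ligand atom-pair distances. The record fields, structure IDs, and values below are invented illustrations, not the actual Protein Relational Database schema.

```python
# Hypothetical table of precomputed protein-ligand atom-pair distances.
# All field names and values are assumptions for illustration only.
pair_table = [
    {"pdb": "1ABC", "prot_atom": "OG1", "residue": "THR", "lig_atom": "N1", "dist": 2.9},
    {"pdb": "1ABC", "prot_atom": "NE2", "residue": "HIS", "lig_atom": "O2", "dist": 3.4},
    {"pdb": "2XYZ", "prot_atom": "OG1", "residue": "THR", "lig_atom": "N1", "dist": 4.8},
]

def query_contacts(table, prot_atom, lig_atom, max_dist):
    """Return structure IDs where the named atom pair lies within max_dist angstroms."""
    return [r["pdb"] for r in table
            if r["prot_atom"] == prot_atom
            and r["lig_atom"] == lig_atom
            and r["dist"] <= max_dist]

# Structures in which a threonine OG1 contacts the ligand's N1 within 3.5 A
hits = query_contacts(pair_table, "OG1", "N1", 3.5)
```

Because the distances are precomputed, such a query reduces to a filter over indexed columns rather than geometry recomputed per structure, which is what makes the fast retrieval possible.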
Green, Lawrence L.
Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, arising either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty: among the choices made during a design analysis, different forms of the analysis process each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structural analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions to enable the reduction of uncertainties in the design process through resource investment. An extensive literature search was conducted to identify and quantify typical model-form uncertainties present within aerospace design, and an initial attempt was made to assemble the results into a searchable KB usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described, key operations within the KB are illustrated, and current limitations of the KB and possible workarounds are explained.
Tramontana, G. Michael; Blood, Ingrid M.; Blood, Gordon W.
The purpose of this study was to determine (a) the general knowledge bases demonstrated by school-based speech-language pathologists (SLPs) in the area of genetics, (b) the confidence levels of SLPs in providing services to children and their families with genetic disorders/syndromes, (c) the attitudes of SLPs regarding genetics and communication…
Ma Cong; Wang Zuojian; Liu Mingye
This paper studies the linkage between the results of high-level synthesis and back-end technology, presents a knowledge-based method of high-level technology mapping, and examines in depth its key components: knowledge representation, knowledge use, and knowledge acquisition. The work includes: (1) an expanded production-rule representation for knowledge about circuit structure; (2) a VHDL-based method for acquiring technology-mapping knowledge; (3) a solution control strategy and algorithm for applying the knowledge; (4) a semi-automatic maintenance method that can find redundancy and contradiction in the knowledge base; and (5) a practical method for embedding the algorithm into the knowledge system to reduce the complexity of the knowledge base. A system has been developed and linked with three kinds of technologies, verifying the work presented in this paper.
Full Text Available The use of dental implants has become a scientifically accepted treatment modality for the rehabilitation of fully and partially edentulous patients, and the evolution of dental implants has completely changed dentistry. Implants can offer a number of benefits, from improved esthetics, to reduced bone loss, to better denture retention for edentulous patients. Branemark et al. were the first to examine submerged titanium implants with a machined surface in dogs; they later termed this phenomenon osseointegration, now defined as "a direct structural and functional connection between ordered, living bone and the surface of a load-bearing implant." Commercially pure titanium is recognized today as the material of choice, since it is characterized by excellent biological and good mechanical properties. In this comprehensive review, the authors explore various biological aspects of dental implants pertinent to clinical procedure, so as to provide a research foundation for establishing strategies that can assist in successful implant therapy.
Serrano, José; Puupponen-Pimiä, Riitta; Dauer, Andreas; Aura, Anna-Marja; Saura-Calixto, Fulgencio
Tannins are a unique group of phenolic metabolites with molecular weights between 500 and 30 000 Da, which are widely distributed in almost all plant foods and beverages. Proanthocyanidins and hydrolysable tannins are the two major groups of these bioactive compounds, but complex tannins containing structural elements of both groups and specific tannins in marine brown algae have also been described. Most literature data on food tannins refer only to oligomeric compounds that are extracted with aqueous-organic solvents, but a significant number of non-extractable tannins are usually not mentioned in the literature. The biological effects of tannins usually depend on their grade of polymerisation and solubility. Highly polymerised tannins exhibit low bioaccessibility in the small intestine and low fermentability by colonic microflora. This review summarises a new approach to analysis of extractable and non-extractable tannins, major food sources, and effects of storage and processing on tannin content and bioavailability. Biological properties such as antioxidant, antimicrobial and antiviral effects are also described. In addition, the role of tannins in diabetes mellitus has been discussed.
Pessemier, Wim; Raskin, Gert; Saey, Philippe; Van Winckel, Hans; Deconinck, Geert
As the new control system of the Mercator Telescope is being finalized, we can review some technologies and design methodologies that are advantageous, despite their relative uncommonness in astronomical instrumentation. Particular to the Mercator Telescope is that it is controlled by a single high-end soft-PLC (Programmable Logic Controller). Using off-the-shelf components only, our distributed embedded system controls all subsystems of the telescope, such as the pneumatic primary mirror support, the hydrostatic bearing, the telescope axes, the dome, the safety system, and so on. We show how real-time application logic can be written conveniently in typical PLC languages (IEC 61131-3) and in C++ (to implement the pointing kernel) using the commercial TwinCAT 3 programming environment. This software processes the inputs and outputs of the distributed system in real time via an observatory-wide EtherCAT network, which is synchronized with high precision to an IEEE 1588 (PTP, Precision Time Protocol) time reference clock. Taking full advantage of the ability of soft-PLCs to run both real-time and non-real-time software, the same device also hosts the most important user interfaces (HMIs or Human Machine Interfaces) and communication servers (OPC UA for process data, FTP for XML configuration data, and VNC for remote control). To manage the complexity of the system and to streamline the development process, we show how most of the software, electronics, and systems engineering aspects of the control system have been modeled as a set of scripts written in a Domain Specific Language (DSL). When executed, these scripts populate a Knowledge Base (KB) which can be queried to retrieve specific information. By feeding the results of those queries to a template system, we were able to generate very detailed "browsable" web-based documentation about the system, but also PLC software code, Python client code, model verification reports, etc. The aim of this paper is to…
Bajić-Brković Milica V.
Full Text Available The application of web-based technologies in developing the knowledge network for planning and development is the topic of this paper. Despite the fact that the web phenomenon is relatively new in the profession and not yet entirely explored, there is evidence which suggests that e-services are amongst the most rapidly growing sectors in the profession today. Numerous e-technologies for planning purposes have already been developed, and often fully integrated into the planning practice. This paper explores the state of the art in the field, and discusses the way the e-based alternative could be utilized in everyday planning practice. At the outset, the existing know-how is presented, followed by the assessment of the tools against the principles of a good planning practice. The challenges to the alternative are highlighted in the last section, and debated vis-à-vis the observed benefits. Implications for concrete planning practice are at the heart of the overall discussion.
The traditional concept of "one-size-fits-all" educational and training programmes is no longer fully adequate to meet increasing demand worldwide. E-learning, as an alternative to traditional face-to-face education, is creating immense challenges for educational institutions to develop new approaches for the production and delivery of cost-effective and efficient e-contents. Although there have been many developments in web-based programmes, they have not fully attained their potential due to a variety of factors, including: 1) lack of exchangeability between learning materials; 2) delivery mechanisms incompatible with the pedagogical design; 3) low student interaction and insensitive learning processes; 4) absence of intelligent online programme advice and guidance; 5) inflexibility in meeting diverse needs; and 6) institutionally centred, ineffective implementation strategies. This paper addresses the critical elements for successful delivery of e-learning environments and then proposes a framework for the development of an integrated knowledge-based learning environment with the potential to produce cost-effective and personalised training programmes.
The paper deals with the learning and innovation strategies of manufacturing companies in transition economies. The point of departure is the development of a theoretical framework dealing with innovation, knowledge and learning. The case is two manufacturing companies in Poland, the learni...
2 or 3 hours of continued knowledge acquisition (Hall and Bandler, 1985:509-510; Tanimoto, 1987:286). Although the knowledge engineer should guide... Department of Research and Information, 1 January 1987. 13. Hall, Lawrence O. and Wyllis Bandler. "Relational Knowledge Acquisition," IEEE Second... Corporation. Acquisition Expert System (AES) Tutorial. Washington, DC: Navy Materiel Command, April 1985. 34. Walters, Richard C. "KBEmacs: Where's the AI?"...
Röll, Natalie; Stork, Wilhelm; Rosales, Bruno; Stephan, René; Knaup, Petra
Manufacturer information, user experiences, and product availability of assistive living technologies are usually not known to citizens or consultation centres. The differing levels of knowledge concerning available technology show the need to build up a knowledge base. The aim of this contribution is to define requirements for the development of knowledge bases for AAL consultations. The major requirements, such as a maintainable and easy-to-use structure, were implemented in a web-based knowledge base that went into production in ~3700 consulting interviews at municipal technology information centres. Within this field phase, the implementation of the requirements for a knowledge base in the field of AAL consulting was evaluated and further developed.
Full Text Available As a platform that builds on users' relationships to acquire, share, and propagate knowledge, WeChat has developed very rapidly and has become an important channel for spreading knowledge. This new mode of propagation is quite different from traditional media and enables knowledge to spread surprisingly widely on WeChat. Based on complex network theory and an analysis of the factors that influence knowledge propagation on WeChat, this paper summarizes the behavioural preferences of WeChat users in knowledge propagation and establishes a WeChat knowledge propagation model. Through simulation experiments, the paper tests the established model and identifies several important thresholds for knowledge propagation on WeChat. The findings are valuable for further study of knowledge propagation on WeChat and provide a theoretical basis for forecasting the scale and influence of knowledge propagation.
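A propagation model of the kind described can be sketched as a rumour-style process on a random network: "spreaders" pass the knowledge to "ignorant" neighbours with some probability and eventually become "stiflers". The states, parameters, and topology below are illustrative assumptions, not the paper's actual model.

```python
import random

random.seed(42)

# Toy rumour-style knowledge propagation on a sparse random directed network.
# beta = per-contact transmission probability, mu = per-step stifling probability.
# All parameter values are illustrative assumptions.
N, beta, mu = 200, 0.3, 0.1
neighbours = {i: [j for j in range(N) if j != i and random.random() < 0.03]
              for i in range(N)}
state = {i: "ignorant" for i in range(N)}
state[0] = "spreader"          # a single initial source of the knowledge

for _ in range(50):            # simulate 50 time steps
    for i in [n for n, s in state.items() if s == "spreader"]:
        for j in neighbours[i]:
            if state[j] == "ignorant" and random.random() < beta:
                state[j] = "spreader"
        if random.random() < mu:
            state[i] = "stifler"

# Final reach of the knowledge: everyone who is no longer ignorant
reached = sum(1 for s in state.values() if s != "ignorant")
```

Sweeping beta or the network density in such a simulation is one way thresholds of the kind the paper reports would show up: below a critical value the knowledge dies out near the source, above it a large fraction of the network is reached.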
Full Text Available BACKGROUND: Major depressive disorder (MDD) is a complex neuropsychiatric syndrome with high heterogeneity. There are different levels of biological components that underlie MDD and interact with each other. To uncover the disease mechanism, large numbers of studies at different levels have been conducted. There is a growing need to integrate data from multiple levels of research into a database to provide a systematic review of current research results. The cross-level integration will also help bridge gaps between research levels for further understanding of MDD. So far, there has been no such effort for MDD. DESCRIPTIONS: We offer researchers a Multi-level Knowledge base for MDD (MK4MDD) to study the interesting interplay of components in the pathophysiological cascade of MDD, from genetic variations to diagnostic syndrome. MK4MDD contains 2,341 components and 5,206 relationships between components based on reported experimental results obtained by diligent literature reading with manual curation. All components were well classified with careful curation and supplementary annotation. The powerful search and visualization tools make all data in MK4MDD form a cross-linked network applicable to a broad range of both basic and applied research. CONCLUSIONS: MK4MDD aims to provide researchers with a central knowledge base and analysis platform for research on MDD etiological and pathophysiological mechanisms. MK4MDD is freely available at http://mdd.psych.ac.cn.
Farit M. Afendi
Full Text Available Molecular biological data has rapidly increased with the recent progress of the Omics fields, e.g., genomics, transcriptomics, proteomics and metabolomics, which necessitates the development of databases and methods for efficient storage, retrieval, integration and analysis of massive data. The present study reviews the usage of the KNApSAcK Family DB in metabolomics and related areas, discusses several statistical methods for handling multivariate data, and shows their application to Indonesian blended herbal medicines (Jamu) as a case study. Exploration using a Biplot reveals that many plants are rarely utilized while some plants are highly utilized toward specific efficacy. Furthermore, the ingredients of Jamu formulas are modeled using Partial Least Squares Discriminant Analysis (PLS-DA) in order to predict their efficacy. The plants used in each Jamu medicine served as the predictors, whereas the efficacy of each Jamu provided the responses. This model produces 71.6% correct classification in predicting efficacy. A permutation test is then used to determine the plants that serve as main ingredients in a Jamu formula by evaluating the significance of the PLS-DA coefficients. Next, in order to explain the role of plants that serve as main ingredients in Jamu medicines, information on the pharmacological activity of the plants is added to the predictor block. Then an N-PLS-DA model, the multiway version of PLS-DA, is utilized to handle the three-dimensional array of the predictor block. The resulting N-PLS-DA model reveals that the effects of some pharmacological activities are specific for certain efficacy while other activities are diverse toward many efficacies. The mathematical modeling introduced in the present study can be utilized in global analysis of big data aiming to reveal the underlying biology.
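The permutation-test step for identifying main ingredients can be sketched as follows. For simplicity the test statistic here is a difference in mean efficacy between formulas containing and lacking a plant rather than a PLS-DA coefficient, and the plants and data are invented illustrations, not the study's data set.

```python
import random

random.seed(7)

# Toy Jamu-style data: each formula is a set of ingredient plants, y marks
# whether the formula is effective for one indication. Names and the
# generating process are illustrative assumptions.
plants = ["curcuma", "ginger", "clove", "tamarind"]
formulas, y = [], []
for _ in range(80):
    f = {p for p in plants if random.random() < 0.5}
    formulas.append(f)
    # in this toy data, "curcuma" truly drives efficacy
    y.append(1 if ("curcuma" in f and random.random() < 0.9)
               or random.random() < 0.1 else 0)

def effect(formulas, y, plant):
    """Difference in mean efficacy between formulas with and without the plant."""
    with_p = [yi for f, yi in zip(formulas, y) if plant in f]
    without = [yi for f, yi in zip(formulas, y) if plant not in f]
    return sum(with_p) / len(with_p) - sum(without) / len(without)

def perm_pvalue(formulas, y, plant, n_perm=1000):
    """Share of label shuffles giving an effect at least as large as observed."""
    observed = abs(effect(formulas, y, plant))
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        random.shuffle(yp)
        if abs(effect(formulas, yp, plant)) >= observed:
            hits += 1
    return hits / n_perm

p_main = perm_pvalue(formulas, y, "curcuma")   # expected to be small
```

A plant whose coefficient (here, mean-difference) survives the permutation test is flagged as a main ingredient; plants whose apparent effect is reproduced by random label shuffles are not.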
Full Text Available The goal of Open Knowledge Maps is to create a visual interface to the world’s scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
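One step of the kind such a pipeline chains together is a similarity measure over article metadata; a minimal sketch is cosine similarity between bag-of-words vectors of titles. The titles are invented examples, not Open Knowledge Maps code or data.

```python
import math
from collections import Counter

# Toy article metadata: title strings keyed by article ID (invented examples).
titles = {
    "A": "ontology based biological data annotation",
    "B": "collaborative annotation of biological data",
    "C": "telescope control system engineering",
}

def cosine(a, b):
    """Cosine similarity between the word-count vectors of two strings."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

sim_ab = cosine(titles["A"], titles["B"])   # related topics, high similarity
sim_ac = cosine(titles["A"], titles["C"])   # unrelated topics, no shared words
```

Pairwise similarities like these can then feed a clustering or layout step, which is what turns a flat result list into the spatial groupings of a knowledge map.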
Liaw, Shu-Sheng; Chen, Gwo-Dong; Huang, Hsiu-Mei
The Web-based technology is a potential tool for supported collaborative learning that may enrich learning performance, such as individual knowledge construction or group knowledge sharing. Thus, understanding Web-based collaborative learning for knowledge management is a critical issue. The present study is to investigate learners' attitudes…
Reports a mismatch between teacher and pupil knowledge of acid-base chemistry as a result of controversial episodes from three science lessons. Suggests that the teacher's knowledge is guided by textbook information while the pupil's knowledge is based on direct experimental experience. Proposes that classroom activities should support the…
In previous research on model-based diagnostic systems, the components are assumed to be mutually independent. However, this assumption does not always hold, because information about whether one component is faulty usually influences our knowledge about other components. An expert may conclude, for example, that "if component m1 is faulty, then component m2 may be faulty too". How can we use this expert knowledge to aid diagnosis? Based on Kohlas's probabilistic assumption-based reasoning method, we use Bayesian networks to solve this problem, calculating the posterior fault probability of the components given the observed state. The results are reasonable and reflect the effectiveness of the expert knowledge.
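The use of such correlation knowledge can be sketched by enumerating a tiny two-component model and conditioning on an observation. All probabilities and the observation model below are illustrative assumptions, not values from the paper.

```python
# Toy two-component fault model encoding the expert rule
# "if m1 is faulty, m2 is likely faulty too". All numbers are assumptions.
p_m1 = 0.1                 # prior fault probability of m1
p_m2_given_m1 = 0.6        # expert knowledge: faults are correlated
p_m2_given_not_m1 = 0.05   # baseline fault probability of m2

def p_obs_bad(m1, m2):
    """Likelihood of observing a bad output given the fault states."""
    return 1.0 if (m1 or m2) else 0.02   # small false-alarm rate

# Enumerate the joint distribution weighted by the observation likelihood.
joint = {}
for m1 in (True, False):
    p1 = p_m1 if m1 else 1 - p_m1
    p2_true = p_m2_given_m1 if m1 else p_m2_given_not_m1
    for m2 in (True, False):
        p2 = p2_true if m2 else 1 - p2_true
        joint[(m1, m2)] = p1 * p2 * p_obs_bad(m1, m2)

# Condition on the bad observation and marginalize over m1.
z = sum(joint.values())
post_m2 = (joint[(True, True)] + joint[(False, True)]) / z
```

With these numbers the prior marginal fault probability of m2 is 0.105, while the posterior given a bad observation rises to about 0.65: the expert's correlation rule shifts suspicion onto m2 once the observation implicates the system.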
Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals--both young and old--encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in…
Crowder, R. M.; Zauner, K.-P.
The design of any robotic system requires input from engineers from a variety of technical fields. This paper describes a project-based module, "Biologically-Inspired Robotics," that is offered to Electronics and Computer Science students at the University of Southampton, U.K. The overall objective of the module is for student groups to…
Full Text Available
ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution to extract the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates the object geometric reconstruction for gripping planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.
Lagos, L.; Upadhyay, H.; Shoffner, P. [Applied Research Center, Florida International University, 10555 W. Flagler Street,EC2100, Miami, FL (United States)
Deactivation and decommissioning (D and D) work is a high-risk and technically challenging enterprise within the U.S. Department of Energy complex. During the past three decades, the DOE's Office of Environmental Management has been in charge of carrying out one of the largest environmental restoration efforts in the world: the cleanup of the Manhattan Project legacy. In today's corporate world, worker experiences and knowledge that have developed over time represent a valuable corporate asset. The ever-dynamic workplace, coupled with an aging workforce, presents corporations with the ongoing challenge of preserving work-related experiences and knowledge for cross-generational transfer to the future workforce. To prevent the D and D knowledge base and expertise from being lost over time, the DOE and the Applied Research Center at Florida International University (FIU) have developed the web-based Knowledge Management Information Tool (KM-IT) to capture and maintain this valuable information in a universally available, easily accessible, and usable system. The D and D KM-IT was developed in collaboration with DOE Headquarters (HQ), the Energy Facility Contractors Group (EFCOG), and the ALARA [as low as reasonably achievable] Centers at the Savannah River Site to preserve the D and D information generated and collected by the D and D community. This open, secured system can be accessed from https://www.dndkm.org over the web and through mobile devices at https://m.dndkm.org. The knowledge system serves as a centralized repository and provides a common interface for D and D-related activities. It also improves efficiency by reducing the need to rediscover knowledge and promotes the reuse of existing knowledge. It is a community-driven system that facilitates the gathering, analyzing, storing, and sharing of knowledge and information within the D and D community. It assists the DOE D and D community in identifying potential solutions…
Full Text Available In the current context of economic globalization and the advent of the virtual business environment, organizations have undergone profound transformations that force companies to reconsider their strategic objectives, especially in view of the opportunities created by new information and communication technologies. Regardless of whether their strategies are reactive or proactive when facing changes in the competition, most companies in developed countries, and more and more Romanian enterprises, are interested in developing technologies and information systems at an intra-, inter-, and extra-organizational level, with integrated traits, capable of sustaining both the managerial process and the traditional functions of the organization. That being said, we are now witnessing the expansion of electronic commerce, or eCommerce, which represents the automation of commercial transactions using information systems and communication technologies. Developing an eCommerce system based on a business-to-business application consists of de-structuring the chain of value in managerial processes and then re-structuring it in order to identify the areas that can be made efficient through electronic means. This study is meant to aid the development of existing models by extending such services into areas of a knowledge-based economy that are less accessible to electronic commerce. As it stands, electronic commerce offers the opportunity of selling products worldwide, increasing the number of potential clients by eliminating the geographical barriers between buyers and sellers. Opting for electronic commerce is a solution when a company wants to diversify its services and reduce market-related costs.
Li, Zengyang; Liang, Peng; Avgeriou, Paris
Context: Knowledge management technologies have been employed across software engineering activities for more than two decades. Knowledge-based approaches can be used to facilitate software architecting activities (e.g., architectural evaluation). However, there is no comprehensive understanding on
We present a description of a PhD thesis that aims to propose a rule-based query answering method for relational data. In this approach we use additional knowledge, represented as a set of rules, which describes the source data at the concept (ontological) level. Queries are posed in the terms of this abstract level. We present two methods: the first uses hybrid reasoning, and the second exploits only forward chaining. Both methods are demonstrated by a prototypical implementation of the system coupled with the Jess engine. Tests are performed on a knowledge base of selected economic crimes: fraudulent disbursement and money laundering.
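The forward-chaining variant can be sketched as iterated rule application to a fixed point. The facts and rule names below are invented illustrations loosely echoing the fraudulent-disbursement domain, not the thesis's actual Jess rules.

```python
# Each rule maps a set of premise facts to a single conclusion fact.
# Facts and rules are illustrative assumptions, not the thesis's rule base.
rules = [
    ({"issued_invoice", "no_goods_delivered"}, "fictitious_transaction"),
    ({"fictitious_transaction", "funds_transferred"}, "fraudulent_disbursement"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived (fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"issued_invoice", "no_goods_delivered",
                         "funds_transferred"}, rules)
```

Note the second rule only fires after the first has added its conclusion, which is exactly the chaining behaviour that lets abstract-level query terms be answered from base-level relational facts.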
Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based ...
Arruda, Wosley C; Souza, Daniel S; Ralha, Célia G; Walter, Maria Emilia M T; Raiol, Tainá; Brigido, Marcelo M; Stadler, Peter F
Noncoding RNAs (ncRNAs) have been the focus of intense research over the last few years. Since the characteristics and signals of ncRNAs are not entirely known, researchers use different computational tools together with their biological knowledge to predict putative ncRNAs. In this context, this work presents ncRNA-Agents, a multi-agent system to annotate ncRNAs based on the output of different tools, using inference rules to simulate biologists' reasoning. Experiments with data from the fungus Saccharomyces cerevisiae allowed us to measure the performance of ncRNA-Agents, which showed better sensitivity when compared to Infernal, a widely used tool for annotating ncRNAs. In addition, data from the fungi Schizosaccharomyces pombe and Paracoccidioides brasiliensis identified novel putative ncRNAs, demonstrating the usefulness of our approach. NcRNA-Agents can be found at: http://www.biomol.unb.br/ncrna-agents.
Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behaviors can be revealed from analysis on known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. Contrary to most existing review articles which mainly describe technical details and applications of various potential models, the main foci for the discussion here are ideas and concepts involving the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
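A standard construction behind such knowledge-based potentials is inverse Boltzmann statistics, E(a,b) = -kT ln(p_obs(a,b) / p_ref(a,b)), relating observed contact frequencies in native structures to an assumed reference state. The contact counts, residue classes, and reference frequencies below are toy values for illustration.

```python
import math

# Inverse Boltzmann construction of a toy contact potential.
# kT, counts, and the reference frequencies are illustrative assumptions.
kT = 0.6  # roughly kcal/mol near 300 K

# Observed contact counts between coarse residue classes in "native" structures
obs_counts = {("HYD", "HYD"): 900, ("HYD", "POL"): 300, ("POL", "POL"): 400}
total_obs = sum(obs_counts.values())

# Reference state: pair frequencies expected from composition alone (assumed)
ref_freq = {("HYD", "HYD"): 0.45, ("HYD", "POL"): 0.35, ("POL", "POL"): 0.20}

# E = -kT * ln(p_obs / p_ref): over-represented contacts get negative
# (favourable) energies, under-represented contacts positive ones.
potential = {pair: -kT * math.log((n / total_obs) / ref_freq[pair])
             for pair, n in obs_counts.items()}
```

In this toy data hydrophobic-hydrophobic contacts occur more often than the reference predicts, so they come out energetically favourable, while mixed hydrophobic-polar contacts are penalized; this is the sense in which "rules governing protein behaviors" are read off from known native structures.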
Groleau, Nick; Grymes, Rosalind A.; Alizadeh, Babak; Friedland, Peter (Technical Monitor)
One of the most significant new opportunities that the Space Station affords cell biologists is the ability to do long-term cultivation of cells in the space environment. This facility is essential for investigations that are primarily focused on effects requiring a longer timeline of observation than that provided by the STS (Space Transportation System) platform. Such work requires both very strong laboratory skills to properly and quickly interact with the hardware hosting the culture and deep knowledge of the cell biology domain in order to optimally react to unanticipated scientific developments. Such work can be enabled by advanced automation techniques that have recently been used in the STS-based Spacelab, and that are being readied for the Space Station. In this paper, we describe the adaptation of PI-in-a-Box, the first interactive space science assistant system, to the study of the effects of space flight on cell cycle progression and proliferation.
Molnár, László; Vágó, István; Fehér, András
A chemical and biological information system with a web-based, easy-to-use interface and corresponding databases has been developed. The system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE and Tripos SYBYL for database management and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.
d'Espaux, Leo; Mendez-Perez, Daniel; Li, Rachel; Keasling, Jay D
The risks of maintaining current CO2 emission trends have led to interest in producing biofuels using engineered microbes. Microbial biofuels reduce emissions because CO2 produced by fuel combustion is offset by CO2 captured by growing biomass, which is later used as feedstock for biofuel fermentation. Hydrocarbons found in petroleum fuels share striking similarity with biological lipids. Here we review synthetic metabolic pathways based on fatty acid and isoprenoid metabolism to produce alkanes and other molecules suitable as biofuels. We further discuss engineering strategies to optimize engineered biosynthetic routes, as well as the potential of synthetic biology for sustainable manufacturing.
Against the background of claims made about the emergence of a new knowledge-based economy, I explore the role of knowledge, learning and innovation in the economy, in relation to regional economic development and to successive conceptions of regional development policies, through the lens of the successive transformations of a particular regional economy, that of north east England. Rather than seeing knowledge as something that has only recently become relevant to e...
Jüttner, Melanie; Neuhaus, Birgit J.
In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and ways of dealing with them. This study investigated 9th and 10th grade German pupils' (n = 461) drawings in an achievement test about the knee-jerk reflex in biology, which were analysed using inductive qualitative content analysis. The empirical data were used for the development of the items in the PCK test. The validity of the items was established through think-aloud interviews with German secondary school teachers (n = 5). Once the items were finalized, their reliability was tested using the results of German secondary school biology teachers (n = 65) who took the PCK test. The results indicated that these items are satisfactorily reliable (Cronbach's alpha values ranged from 0.60 to 0.65). We suggest that a larger sample and American biology teachers be included in further studies. The findings of this study about teachers' professional knowledge from the PCK test could provide new information about the influence of teachers' knowledge on their pupils' understanding of biology and their possible errors in learning biology.
"Challenging, theoretically rich yet anchored in detailed empirical analysis, Loet Leydesdorff's exploration of the dynamics of the knowledge-economy is a major contribution to the field. Drawing on his expertise in science and technology studies, systems theory, and his internationally respected work on the 'triple helix', the book provides a radically new modelling and simulation of knowledge systems, capturing the articulation of structure, communication, and agency therein. This work will be of immense interest to both theorists of the knowledge-economy and practitioners in science policy." Andrew Webster Science & Technology Studies, University of York, UK
Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I
Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements around 0.6-1.1 kWh/m3 of treated wastewater and investment costs similar to those of conventional treatment plants, the main market niche for MBRs is in areas with very restrictive discharge limits, where treatment plants have to be compact, or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently there is still a need, and room, for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for automating air-scour consumption and minimising energy consumption.
Bélanger, Julie; Johns, Timothy
Human and ecosystem health converge around biological diversity issues. Cultivated and wild plants as food and medicine make essential contributions to human health, which in turn provides rationales for conservation. While wild and cultivated plant diversity reasonably facilitates dietary diversity and positive health outcomes, the challenges of demonstrating this relationship limit its impact in concept, policy, and practice. We present a rationale for testing the dietary contribution of biological diversity to improved eye health as a case study based on existing phytochemical, pharmacological, and clinical knowledge. We consider the empirical evidence needed to substantiate, interpret, and apply this relationship at a population and ecosystem level within a unified research framework. Epidemiological data strongly support the prevention of childhood vitamin A deficiency blindness, cataract, and age-related macular degeneration by fruit and vegetable consumption. Phytonutrients, including the carotenoids lutein and zeaxanthin, protect the eye from oxidative stress and harmful light exposure. Laboratory, community, and population level research should prioritize food composition of dietary plants from both agriculture and the wild. Intervention studies, focus groups, and transmission of knowledge of local species and varieties within communities will further interpretation of epidemiological data. Population-based studies combining clinical data and measures of access and consumption of biological diversity are key to demonstrating the important relationships among biodiversity, dietary diversity, and health outcomes.
Klinke, David J
Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics.
Sellu, George Sahr
schools. Thoron & Meyer (2011) suggested that research into the contribution of integrated science courses toward higher test scores has yielded mixed results. This finding may be due in part to the fact that integrated science courses incorporate only select topics into agriculture education courses. In California, however, agriculture educators have developed standards-based courses such as Agriculture Biology (AgBio) that cover the same content standards as core traditional courses such as traditional biology. Students in both AgBio and traditional biology take the same standardized biology test. This provides, for the first time, a fair comparison and a uniform metric by which an agriscience course such as AgBio can be directly compared to traditional biology. This study examines whether there are differences between AgBio and traditional biology with regard to standardized test scores in biology. Furthermore, the study examines differences in perception between teachers and students regarding teaching and learning activities associated with higher achievement in science. The findings could provide a basis for presenting AgBio as a potential alternative to traditional biology. The findings suggest that there are no differences between AgBio and traditional biology students with regard to standardized biology test scores. Additionally, the findings indicate that co-curricular activities in AgBio could contribute to higher student achievement in biology. However, further research is required to identify the specific activities in AgBio that contribute to higher achievement in science.
This paper compares the concepts of "subject-matter didactics" (Fachdidaktik) with "pedagogical content knowledge". The former is based on German didaktik and has a long tradition. The latter was introduced by Lee Shulman in the late 1980s and has no tradition in the same way as its German counterpart. Both of the concepts deal…
Chein, Michel
In addressing the question of how far it is possible to go in knowledge representation and reasoning through graphs, the authors cover basic conceptual graphs, computational aspects, and kernel extensions. The basic mathematical notions are summarized.
Wu, Shinq-Jen; Wu, Cheng-Tao; Chang, Jyh-Yeong
The inverse problem of identifying dynamic biological networks from their time-course response data is a cornerstone of systems biology. The Hill and Michaelis-Menten models, which are forward approaches, provide local kinetic information, but repeated modifications and a large amount of experimental data are necessary for parameter identification. The S-system model, which is composed of highly nonlinear differential equations, provides direct identification of an interactive network, but identification of the skeletal network structure is challenging. Moreover, biological systems are always subject to uncertainty and noise. Are there suitable candidates with the potential to deal with noise-contaminated data sets? Fuzzy set theory was developed for handling uncertainty, imprecision and complexity in the real world; for example, we say "driving speed is high", wherein speed is a fuzzy variable and high is a fuzzy set, which uses a membership function to indicate the degree to which an element belongs to the set (italics denote fuzzy variables or fuzzy sets). Neural networks possess good robustness and learning capability. In this study we hybridize the two into a neural-fuzzy modeling technique. A biological system is formulated as a multi-input-multi-output (MIMO) Takagi-Sugeno (T-S) fuzzy system, which is composed of rule-based linear subsystems. Two kinds of smooth membership functions (MFs), Gaussian and bell-shaped MFs, are used. The performance of the proposed method is tested on three biological systems.
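The membership-function idea in this abstract can be made concrete with a minimal sketch (the centres, widths and rule consequents below are made-up illustrative values, not the authors' parameters): a Gaussian membership function grades how strongly an input belongs to a fuzzy set, and a Takagi-Sugeno system blends the outputs of its linear rule consequents, weighted by each rule's firing strength.

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership: degree to which x belongs to a fuzzy set
    centred at c with width sigma (1.0 at the centre, decaying outward)."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def ts_output(x, rules):
    """First-order Takagi-Sugeno inference for a single input x.
    Each rule is (centre, sigma, a, b): IF x is A_i THEN y = a*x + b.
    The output is the firing-strength-weighted average of the consequents."""
    weights = [gaussian_mf(x, c, s) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# Two illustrative rules: "speed is low" and "speed is high"
rules = [(20.0, 10.0, 0.1, 1.0),   # IF speed is low  THEN y = 0.1*x + 1
         (80.0, 10.0, 0.5, -2.0)]  # IF speed is high THEN y = 0.5*x - 2

print(ts_output(50.0, rules))  # midpoint input: both rules fire equally
```

At x = 50 the two Gaussians fire with equal strength, so the output is the plain average of the two rule consequents (6 and 23), i.e. 14.5; moving x toward either centre shifts the blend toward that rule.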
This paper presents a knowledge learning diagnostic approach implemented in an educational system. Probabilistic inference is used to diagnose the learner's level of knowledge understanding and to reason about the probable causes of the learner's misconceptions. When a learner takes an assessment, the system uses probabilistic reasoning to advise the learner about the most likely cause of each error and also provides the corresponding part of the theory that addresses the errors related to his or her misconceptions.
Manufacturing enterprises are under increasing pressure to produce higher-quality products at lower cost in shorter time frames if they are to remain competitive. Engineering design support methods can help companies to achieve these goals. One such approach is design knowledge reuse. Industrial requirements have been identified as (i) the ability to rapidly create product variants; (ii) the ability to capture and reuse design knowledge; and (iii) the capability to support...
Robeva, Raina; Davies, Robin; Hodge, Terrell; Enyedi, Alexander
We describe an ongoing collaborative curriculum materials development project between Sweet Briar College and Western Michigan University, with support from the National Science Foundation. We present a collection of modules under development that can be used in existing mathematics and biology courses, and we address a critical national need to introduce students to mathematical methods beyond the interface of biology with calculus. Based on ongoing research, and designed to use the project-based-learning approach, the modules highlight applications of modern discrete mathematics and algebraic statistics to pressing problems in molecular biology. For the majority of projects, calculus is not a required prerequisite and, due to the modest amount of mathematical background needed for some of the modules, the materials can be used for an early introduction to mathematical modeling. At the same time, most modules are connected with topics in linear and abstract algebra, algebraic geometry, and probability, and they can be used as meaningful applied introductions into the relevant advanced-level mathematics courses. Open-source software is used to facilitate the relevant computations. As a detailed example, we outline a module that focuses on Boolean models of the lac operon network.
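As an illustration of the kind of Boolean modeling the lac operon module highlights, here is a minimal sketch of a simplified three-variable Boolean lac operon network (a common textbook simplification, not necessarily the exact model used in the module):

```python
from itertools import product

def step(state, Le, Ge):
    """One synchronous update of a simplified Boolean lac operon model.
    state = (M, E, L): mRNA, lac enzymes, intracellular lactose.
    Le, Ge: extracellular lactose and glucose (fixed Boolean inputs).
    Simplified update rules:
      M' = (L or Le) and not Ge   # transcription: lactose present, glucose absent
      E' = M                      # enzymes translated from mRNA
      L' = E and Le and not Ge    # lactose imported by permease
    """
    M, E, L = state
    return ((L or Le) and not Ge, M, E and Le and not Ge)

def fixed_points(Le, Ge):
    """Enumerate the steady states of the network for given inputs."""
    return [s for s in product([False, True], repeat=3) if step(s, Le, Ge) == s]

print(fixed_points(Le=True, Ge=False))   # → [(True, True, True)]  operon ON
print(fixed_points(Le=False, Ge=False))  # → [(False, False, False)]  operon OFF
```

Exhaustively checking all 2^3 states against the update map is exactly the kind of finite, calculus-free computation such discrete-mathematics modules can build on.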
LIU Ju; LI Yong-jian
Emerging technologies are now initiating new industries and transforming old ones with tremendous power. They are different games compared with established technologies, with distinctive characteristics of knowledge management in knowledge-based and technological-innovation-based competition. How emerging-technology-based strategic alliances (ETBSAs) can obtain knowledge advantage and enhance competences through knowledge sharing is the concern of this paper. On the basis of our previous work on the distinctive attributes of emerging technologies, we counter the widespread presumption that the primary purpose of strategic alliances is knowledge acquisition by means of learning. We offer new insight into the knowledge sharing approaches of ETBSAs: the knowledge integrating approach, by which each member firm integrates its partner's complementary knowledge base into its products and services while maintaining its own knowledge specialization. ETBSAs should therefore plan and practice their knowledge sharing strategies from the angle of knowledge integrating rather than knowledge acquiring. A four-dimensional framework is developed to analyze the advantages and disadvantages of these two knowledge sharing approaches. Some cases from the electronics industry are introduced to illustrate our point of view.
Dauer, Joseph T.; Long, Tammy M.
One of the goals of college-level introductory biology is to establish a foundation of knowledge and skills that can be built upon throughout a biology curriculum. In a reformed introductory biology course, we used iterative model construction as a pedagogical tool to promote students' understanding about conceptual connections, particularly those…
Paul Chuchana; Philippe Holzmuller; Frederic Vezilier; David Berthier; Isabelle Chantal; Dany Severac; Jean Loup Lemesre; Gerard Cuny; Philippe Nirdé; Bruno Bucheton
BACKGROUND: Many tools for analyzing microarrays under different conditions have been described. However, the integration of deregulated genes within coherent metabolic pathways is lacking. Currently no objective selection criterion based on biological functions exists to determine a threshold demonstrating that a gene is indeed differentially expressed. METHODOLOGY/PRINCIPAL FINDINGS: To improve the transcriptomic analysis of microarrays, we propose a new statistical appro...
Reed S Beaman
The development of biological informatics infrastructure capable of supporting growing data management and analysis environments is an increasing need within the systematics community. Although significant progress has been made in recent years on developing new algorithms and tools for analyzing and visualizing large phylogenetic data sets and trees, implementation of these resources is often carried out by bioinformatics experts using one-off scripts. Therefore, a gap exists in providing data management support for a large set of non-technical users. The TOLKIN project (Tree of Life Knowledge and Information Network) addresses this need by supporting capabilities to manage, integrate, and provide public access to molecular, morphological, and biocollections data and research outcomes through a collaborative web application. This data management framework allows aggregation and import of sequences and underlying documentation about their source, including vouchers, tissues, and DNA extractions. It combines features of LIMS and workflow environments by supporting management at the level of individual observations, sequences, and specimens, as well as assembly and versioning of the data sets used in phylogenetic inference. As a web application, the system provides multi-user support that obviates the current practice of sharing data sets as files or spreadsheets via email.
Coico, Richard; Woodruff-Pak, Diana S
This timely special issue of the Journal of Alzheimer's Disease provides the opportunity to examine interfaces between basic science and clinical medicine using animal models to develop more effective therapies for the treatment and, ideally, prevention of Alzheimer's disease (AD). That some patients with AD enrolled in a clinical trial of inoculation against amyloid-beta (Abeta) experienced a misdirected polarization of Th cells reminds us that our knowledge of T cell biology, immune regulation, and the precise functional properties of adjuvants is incomplete. We review this knowledge and consider the advantages of the rabbit for immunological studies. This lagomorph species is proximate to primates on the phylogenetic scale, its amino acid sequence of Abeta is 97% identical to the human Abeta sequence, and the rabbit model system is extensively characterized on a form of associative learning that has parallels in normal aging in rabbits and humans and is severely impaired in human AD. Cholesterol-fed rabbits treated with Abeta immunotherapy generate high-titer anti-Abeta responses. The cholesterol-fed rabbit model of AD, with its close parallels to human genetics and physiology and its validity from molecular to cognitive levels as a model of human AD, provides a promising vehicle for the development of immunotherapies.
YANG BingRu; SONG Wei; XU ZhangYan
Knowledge acquisition is the bottleneck of expert systems. To solve this problem, KD (D&K), a comprehensive knowledge discovery process model in which database and knowledge base cooperate, and its related technology are proposed. Based on KD (D&K) and this technology, a new construction of an Expert System based on Knowledge Discovery (ESKD) is proposed. As the key knowledge acquisition component of ESKD, KD (D&K) is composed of KDD* and KDK*: KDD*, a new process model based on a double-base cooperating mechanism, and KDK*, a new process model based on a double-basis fusion mechanism, are introduced in turn. The overall framework of ESKD is presented, and some of its sub-systems and its dynamic knowledge base system are discussed. Finally, the effectiveness and advantages of ESKD are tested on a real-world agriculture database. We hope that ESKD may be useful for the new generation of expert systems.
Evidence based educational policy and practice means, first of all, that new programs use relevant scientific knowledge for design purposes, or for critical review of initial program ideas (ex ante evaluation). Secondly, before programs are implemented on a large scale it is considered desirable to
The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise, and their literatures and data are consequently fragmented, distributed and sometimes, at least apparently, inconsistent. This makes it extremely difficult to build significant explanatory models, with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown promise for representing the highly combinatorial systems typically found in signalling, where many of the proteins are composed of multiple binding domains capable of simultaneous interactions and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction, which is rarely available from any single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers, each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model, from which we define an automated translation into executable Kappa programs.
In the knowledge-based economy, a company performs a set of activities focused on knowledge: identifying necessary knowledge, buying knowledge, learning, acquiring knowledge, creating knowledge, storing knowledge, sharing knowledge, using knowledge, protecting knowledge, and capitalizing on knowledge. As a result, a new function emerges: the knowledge function. In knowledge-based companies, not all knowledge has the same impact. The analysis of the actual situations in the most developed an...
Brüningk, Sarah C., E-mail: firstname.lastname@example.org; Kamp, Florian; Wilkens, Jan J. [Department of Radiation Oncology, Technische Universität München, Klinikum rechts der Isar, Ismaninger Str. 22, München 81675, Germany and Physik-Department, Technische Universität München, James-Franck-Str. 1, Garching 85748 (Germany)
Purpose: Treatment planning for carbon ion therapy requires an accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable to carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and exemplarily implemented it as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon
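The generalized-mean construction described above can be written out explicitly. For photon EUD (following the established formulation) and, by analogy, the proposed EUE over biological effects (the notation here is assumed: $v_i$ the relative volume of voxel $i$, $D_i$ its RBE-weighted dose, $\varepsilon_i$ its biological effect, and $a$, $k$ the volume-effect parameters accounting for organ architecture):

```latex
\mathrm{EUD} = \left( \sum_i v_i \, D_i^{\,a} \right)^{1/a},
\qquad
\mathrm{EUE} = \left( \sum_i v_i \, \varepsilon_i^{\,k} \right)^{1/k}
```

In this reading, the EUE simply replaces the dose in the generalized mean with the effect itself, which is why it needs no reference radiation and applies to ion and photon therapies alike.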
Lansing, Carina S.; Liu, Yan; Yin, Jian; Corrigan, Abigail L.; Guillen, Zoe C.; Kleese van Dam, Kerstin; Gorton, Ian
Systems biology research, even more than many other scientific domains, is becoming increasingly data-intensive. Not only have advances in experimental and computational technologies led to an exponential increase in scientific data volumes and their complexity, but increasingly such databases themselves are providing the basis for new scientific discoveries. To engage effectively with these community resources, integrated analysis, synthesis and simulation software is needed, regularly supported by scientific workflows. In order to provide a more collaborative, community-driven research environment for this heterogeneous setting, the Department of Energy (DOE) has decided to develop a federated, cloud-based cyberinfrastructure: the Systems Biology Knowledgebase (Kbase). Pacific Northwest National Laboratory (PNNL), with its long tradition in data-intensive science, led two of the five initial pilot projects, which focused on defining and testing the basic federated cloud-based system architecture and developing a prototype implementation. Community-wide accessibility of biological data and the capability to integrate and analyze these data within their changing research context were seen as key technical functionalities that the Kbase needed to enable. In this paper we describe the results of our investigations into the design of a cloud-based federated infrastructure for: (1) semantics-driven data discovery, access and integration; (2) data annotation, publication and sharing; (3) workflow-enabled data analysis; and (4) project-based collaborative working. We describe our approach, exemplary use cases and our prototype implementation, which demonstrates the feasibility of this approach.
L.M. da Costa Monteiro de Carvalho (Luís); W. van Winden (Willem)
How and why do firms interact with and benefit from regionally based sources of knowledge? Although firms increasingly search for and source knowledge worldwide, and many are inserted in global production and knowledge networks, there is a refreshed interest in the economic geography literature...
Kang, Myunghee; Byun, Hoseung Paul
Provides a conceptual model for a Web-based Knowledge Construction Support System (KCSS) that helps learners acquire factual knowledge and supports the construction of new knowledge through individual internalization and collaboration with other people. Considers learning communities, motivation, cognitive styles, learning strategies,…
Zhou, Yuan; Li, Xin; Lema, Rasmus;
This study uses patent analyses to compare the knowledge bases of leading wind turbine firms in Asia and Europe. It concentrates on the following three aspects: the trajectories of key technologies, external knowledge networks, and the globalisation of knowledge application. Our analyses suggest ...
Ioi, Toshihiro; Ono, Masakazu; Ishii, Kota; Kato, Kazuhiko
Purpose: The purpose of this paper is to propose a method for the transfer of knowledge and skills in project management (PM) based on techniques in knowledge management (KM). Design/methodology/approach: The literature contains studies on methods to extract experiential knowledge in PM, but few studies exist that focus on methods to convert…
Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G
Modeling of signal transduction pathways is instrumental for understanding cells' function. Researchers have tackled the modeling of signaling pathways in order to accurately represent the signaling events inside cells' biochemical microenvironment in a way meaningful for scientists in the biological field. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on logic modeling of pathways using optimization formulations or machine learning algorithms have been published on this front over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology that handles inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium- and large-scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against made-up experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies comparable to those of the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
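The breadth-first traversal at the core of this approach can be sketched as follows (a generic BFS shortest-path routine; the toy PKN below uses hypothetical protein names and edges for illustration, not the authors' curated EGF/TNFA model):

```python
from collections import deque

def shortest_path(pkn, source, target):
    """Breadth-first search over a Prior Knowledge Network (PKN),
    given as {protein: [downstream proteins]}. Returns one shortest
    source-to-target path, or None if the target is unreachable."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:  # reconstruct the path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in pkn.get(node, []):
            if nxt not in parent:  # first visit is shortest in an unweighted graph
                parent[nxt] = node
                queue.append(nxt)
    return None

# Toy EGF-like topology (hypothetical, for illustration only)
pkn = {"EGF": ["EGFR"], "EGFR": ["RAS", "PI3K"],
       "RAS": ["RAF"], "RAF": ["ERK"], "PI3K": ["AKT"]}
print(shortest_path(pkn, "EGF", "ERK"))  # → ['EGF', 'EGFR', 'RAS', 'RAF', 'ERK']
```

Because BFS explores the unweighted graph level by level, the first time the target is dequeued the reconstructed path is guaranteed shortest, which is what makes this traversal light and fast for scoring candidate pathways.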
Anwari, Nahdi, Maizer Said; Sulistyowati, Eka
Local wisdom, as a product of local knowledge, has been giving a local context to science development. Local wisdom is important for connecting scientific theories to local conditions; hence, science can be accessed by ordinary people. Using local wisdom as a model for learning science enables students to build contextual learning, so that learning science becomes more meaningful and more accessible for students in a local community. Based on this consideration, this research developed a model for learning biology based on Turgo's local wisdom on managing biodiversity. For this purpose, Turgo's biodiversity was mapped, and any local values that co-exist with the biodiversity were recorded. All of this information was then used to build a hypothetical model for developing materials for teaching biology in a senior high school adjacent to Turgo. This research employed a qualitative method. We combined questionnaires, interviews, and observation to gather the data. We found that the Turgo community has been practicing local wisdom on using traditional plants for many purposes, including land management and the practice of rituals and traditional ceremonies. There were local values they embrace which enable them to manage nature wisely. After being cross-referenced with the literature on educational philosophy, educational theories and teaching, and the biology curriculum for Indonesia's senior high schools, we concluded that Turgo's local wisdom on managing biodiversity can be recommended for use as learning materials and sources for biology learning in schools.
Memišević, Vesna; Pržulj, Nataša
Networks are an invaluable framework for modeling biological systems. Analyzing protein-protein interaction (PPI) networks can provide insight into underlying cellular processes. It is expected that the comparison and alignment of biological networks will have an impact on our understanding of evolution, biological function, and disease similar to that of sequence comparison and alignment. Here, we introduce a novel pairwise global alignment algorithm called Common-neighbors based GRAph ALigner (C-GRAAL) that uses heuristics to maximize the number of aligned edges between two networks and is based solely on network topology. As such, it can be applied to any type of network, such as social, transportation, or electrical networks. We apply C-GRAAL to align PPI networks of eukaryotic and prokaryotic species, as well as inter-species PPI networks, and we demonstrate that the resulting alignments expose large connected regions that are both topologically and functionally aligned. We use the resulting alignments to transfer biological knowledge across species, successfully validating many of the predictions. Moreover, we show that C-GRAAL can be used to align human-pathogen inter-species PPI networks and that it can identify patterns of pathogen interactions with host proteins solely from network topology.
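The common-neighbors heuristic at the heart of such aligners can be sketched as a greedy seed-and-extend procedure: starting from one aligned pair, repeatedly align the node pair that shares the most already-aligned neighbours. This is a simplified illustration under that assumption, not the published C-GRAAL algorithm, and the toy graphs are invented.

```python
def greedy_align(g1, g2, seed):
    """Greedy common-neighbors alignment sketch. g1 and g2 map each node
    to its set of neighbours; seed is one (g1_node, g2_node) pair assumed
    to correspond. Returns a g1-node -> g2-node mapping."""
    aligned = dict([seed])   # g1 node -> g2 node
    used = {seed[1]}         # g2 nodes already taken
    changed = True
    while changed:
        changed = False
        best, score = None, 0
        for u in g1:
            if u in aligned:
                continue
            for v in g2:
                if v in used:
                    continue
                # count u's neighbours whose aligned image neighbours v
                s = sum(1 for n in g1[u] if aligned.get(n) in g2[v])
                if s > score:
                    best, score = (u, v), s
        if best:
            aligned[best[0]] = best[1]
            used.add(best[1])
            changed = True
    return aligned

# Two isomorphic toy paths a-b-c and x-y-z, seeded at the middle nodes
g1 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
g2 = {"x": {"y"}, "y": {"x", "z"}, "z": {"y"}}
print(greedy_align(g1, g2, ("b", "y")))  # {'b': 'y', 'a': 'x', 'c': 'z'}
```

Because only topology is consulted, the same routine applies unchanged to social or transportation networks, as the abstract notes.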
Kiefer, Markus; Schmickl, Roswitha; German, Dmitry A; Mandáková, Terezie; Lysak, Martin A; Al-Shehbaz, Ihsan A; Franzke, Andreas; Mummenhoff, Klaus; Stamatakis, Alexandros; Koch, Marcus A
The Brassicaceae family (mustards or crucifers) includes Arabidopsis thaliana as one of the most important model species in plant biology and a number of important crop plants such as the various Brassica species (e.g. cabbage, canola and mustard). Moreover, the family comprises an increasing number of species that serve as study systems in many fields of plant science and evolutionary research. However, the systematics and taxonomy of the family are very complex and access to scientifically valuable and reliable information linked to species and genus names and its interpretation are often difficult. BrassiBase is a continuously developing and growing knowledge database (http://brassibase.cos.uni-heidelberg.de) that aims at providing direct access to many different types of information ranging from taxonomy and systematics to phylo- and cytogenetics. Providing critically revised key information, the database intends to optimize comparative evolutionary research in this family and supports the introduction of the Brassicaceae as the model family for evolutionary biology and plant sciences. Some features that should help to accomplish these goals within a comprehensive taxonomic framework have now been implemented in the new version 1.1.9. A 'Phylogenetic Placement Tool' should help to identify critical accessions and germplasm and provide a first visualization of phylogenetic relationships. The 'Cytogenetics Tool' provides in-depth information on genome sizes, chromosome numbers and polyploidy, and sets this information into a Brassicaceae-wide context.
Aguilar, Alfredo; Bochereau, Laurent; Matthiessen, Line
The European Commission has defined the Knowledge-Based Bio-Economy (KBBE) as the process of transforming life science knowledge into new, sustainable, eco-efficient and competitive products. The term "Bio-Economy" encompasses all industries and economic sectors that produce, manage and otherwise exploit biological resources and related services. Over recent decades, biotechnologies have led to innovations in many agricultural, industrial, and medical sectors and societal activities. Biotechnology will continue to be a major contributor to the Bio-Economy, playing an essential role in supporting economic growth, employment, energy supply, a new generation of bio-products, and the maintenance of living standards. The paper reviews some of the main biotechnology-related research activities at the European level. Beyond the 7th Framework Programme for Research and Technological Development (FP7), several initiatives have been launched to better integrate FP7 with European national research activities, promote public-private partnerships, and create better market and regulatory environments for stimulating innovation.
The aim of this thesis is an analysis of the educational system in Mexico, the role it plays in the knowledge-based economy, and the country's ability to compete globally. The practical part of the thesis focuses on a comparison of the knowledge-based economy in Mexico with selected Latin American and OECD countries. The emphasis is placed upon education. The case study compares public and private education in Mexico and their support for the knowledge-based economy.
Zhou, Mengjie; Wang, Rui; Tian, Jing; Ye, Ning; Mai, Shumin
The internet enables the rapid and easy creation, storage, and transfer of knowledge; however, services that transfer geographic knowledge and facilitate the public understanding of geographic knowledge are still underdeveloped to date. Existing online maps (or atlases) can support limited types of geographic knowledge. In this study, we propose a framework for map-based services to represent and transfer different types of geographic knowledge to the public. A map-based service provides tools to ensure the effective transfer of geographic knowledge. We discuss the types of geographic knowledge that should be represented and transferred to the public, and we propose guidelines and a method to represent various types of knowledge through a map-based service. To facilitate the effective transfer of geographic knowledge, tools such as auxiliary background knowledge and auxiliary map-reading tools are provided through interactions with maps. An experiment conducted to illustrate our idea and to evaluate the usefulness of the map-based service is described; the results demonstrate that the map-based service is useful for transferring different types of geographic knowledge.
Knowledge management should be seen as an ongoing process that accentuates the role of knowledge-based resources in the management of the firm. Instead of seeing knowledge management purely as a technological solution, this article argues that knowledge management should be regarded as a metaphorical perspective on management, where the managerial focus depends on the epistemological standpoint either of management or of the researchers observing management.
Herman H H B M van Haagen
MOTIVATION: Weighted semantic networks built from text-mined literature can be used to retrieve known protein-protein or gene-disease associations, and have been shown to anticipate associations years before they are explicitly stated in the literature. Our text-mining system recognizes over 640,000 biomedical concepts: some are specific (i.e., names of genes or proteins), others generic (e.g., 'Homo sapiens'). Generic concepts may play important roles in automated information retrieval, extraction, and inference but may also result in concept overload and confound retrieval and reasoning with low-relevance or even spurious links. Here, we attempted to optimize the retrieval performance for protein-protein interactions (PPI) by filtering generic concepts (node filtering) or links to generic concepts (edge filtering) from a weighted semantic network. First, we defined metrics based on network properties that quantify the specificity of concepts. Then, using these metrics, we systematically filtered generic information from the network while monitoring the retrieval performance of known protein-protein interactions. We also systematically filtered specific information from the network (inverse filtering), and assessed the retrieval performance of networks composed of generic information alone. RESULTS: Filtering generic or specific information induced a two-phase response in retrieval performance: initially the effects of filtering were minimal, but beyond a critical threshold network performance suddenly dropped. Contrary to expectations, networks composed exclusively of generic information demonstrated retrieval performance comparable to unfiltered networks that also contain specific concepts. Furthermore, an analysis using individual generic concepts demonstrated that they can effectively support the retrieval of known protein-protein interactions. For instance, the concept "binding" is indicative for PPI retrieval and the concept "mutation abnormality" is
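The node-filtering step can be sketched as follows. Here concept specificity is approximated simply as inverse degree (a highly connected concept is treated as generic); this metric and the toy network are illustrative assumptions, not the metrics defined in the paper.

```python
def degree(net, node):
    """Number of neighbours of a concept in the semantic network."""
    return len(net.get(node, set()))

def filter_generic_nodes(net, max_degree):
    """Node filtering sketch: drop every concept whose degree exceeds
    max_degree (treated as 'generic'), along with its incident edges."""
    keep = {n for n in net if degree(net, n) <= max_degree}
    return {n: {m for m in net[n] if m in keep} for n in keep}

# Toy undirected semantic network with one hub-like generic concept
net = {"binding": {"p53", "mdm2", "bcl2"},
       "p53": {"binding", "mdm2"},
       "mdm2": {"binding", "p53"},
       "bcl2": {"binding"}}
print(filter_generic_nodes(net, 2))  # 'binding' (degree 3) is removed
```

Sweeping `max_degree` downward while monitoring PPI retrieval would reproduce the kind of threshold experiment the abstract describes.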
Paul F.M.J. Verschure
Antipersonnel mines, weapons of cheap manufacture but lethal effect, have a high impact on the population even decades after conflicts have finished. Here we investigate the use of a chemo-sensing Unmanned Aerial Vehicle (cUAV) for demining tasks. We developed a blimp-based UAV that is equipped with a broadly tuned thin metal-oxide chemo-sensor. A number of chemical mapping strategies were investigated, including two biologically based localization strategies derived from the moth's chemical search, that can optimize the efficiency of the detection and localization of explosives and therefore be used in the demining process. Additionally, we developed a control layer that allows for both fully autonomous and manually controlled flight, as well as for the scheduling of a fleet of cUAVs. Our results confirm the feasibility of this technology for demining in real-world scenarios and give further support to a biologically based approach where the understanding of biological systems is used to solve difficult engineering problems.
Tieju Ma; Mina Ryoke; Yoshiteru Nakamori
In this paper, an agent-based simulation of knowledge transition and its associated social impact in a market is introduced. In the simulation, a genetic algorithm is used to generate next-generation products, and a dynamic social impact model is used to simulate how customers are influenced by other customers. The simulation and its results not only show some features and patterns of knowledge transition, but also explore and display some phenomena of business cultures. On the basis of the innovation model of the knowledge-based economy, the transition between technical knowledge and product knowledge is discussed, and a fuzzy linear quantification model that can be used to simulate this transition is introduced.
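The product-generation step can be sketched with a standard genetic algorithm over products encoded as bit strings of feature choices. This is a minimal sketch of the mechanism only; the encoding, fitness function, and parameters are illustrative assumptions, not the paper's model.

```python
import random

def next_generation(pop, fitness, rng):
    """One GA step: fitness-proportional parent selection, one-point
    crossover, and a small chance of flipping one feature bit."""
    weights = [fitness(p) for p in pop]
    new_pop = []
    while len(new_pop) < len(pop):
        a, b = rng.choices(pop, weights=weights, k=2)  # select two parents
        cut = rng.randrange(1, len(a))                 # one-point crossover
        child = a[:cut] + b[cut:]
        i = rng.randrange(len(child))                  # pick one position
        if rng.random() < 0.1:                         # mutate it rarely
            child[i] = 1 - child[i]
        new_pop.append(child)
    return new_pop

# Fitness = number of desirable features; evolve a small product population
rng = random.Random(42)
pop = [[0, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1], [1, 0, 0, 1]]
for _ in range(10):
    pop = next_generation(pop, sum, rng)
```

In the full simulation, the fitness signal itself would come from customers governed by the dynamic social impact model rather than a fixed function.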
WEI Sheng-jun; HU Chang-zhen; SUN Ming-qian
A method of knowledge representation and learning based on fuzzy Petri nets was designed, in which the weights, threshold values, and certainty factors of the knowledge model can be adjusted dynamically. The method integrates the advantages of knowledge representation based on production rules and on neural networks. Like production-rule representation, it has a clear structure and parameters with specific meanings; in addition, like neural-network representation, it supports learning and parallel reasoning. Simulation results show that the learning algorithm converges and that the weights, threshold values, and certainty factors reach the desired levels after training.
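The role of the three adjustable parameters can be illustrated with the firing of a single fuzzy production rule, modelled as a transition in a fuzzy Petri net. This is a common textbook formulation used here for illustration; the paper's contribution is the learning procedure that tunes these parameters, which is not shown.

```python
def fire_rule(truths, weights, threshold, certainty):
    """Fire one fuzzy rule (a Petri-net transition): the rule fires when
    the weighted antecedent truth reaches the threshold, and the
    conclusion token receives that value scaled by the certainty factor."""
    activation = sum(t * w for t, w in zip(truths, weights))
    return activation * certainty if activation >= threshold else 0.0

print(fire_rule([0.9, 0.8], [0.6, 0.4], 0.5, 0.9))  # ≈ 0.774
print(fire_rule([0.2, 0.1], [0.6, 0.4], 0.5, 0.9))  # 0.0 (below threshold)
```

Training in the described method would adjust `weights`, `threshold`, and `certainty` from examples, much as a neural network adjusts its connection weights.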
Liu, Ming-Chou; Wang, Jhen-Yu
Theme-based learning (TBL) refers to learning modes which adopt the following sequence: (a) finding the theme; (b) finding a focus of interest based on the theme; (c) finding materials based on the focus of interest; (d) integrating the materials to establish shared knowledge; (e) publishing and sharing the integrated knowledge. We have created an…
Paynter, Jessica M.; Keen, Deb
This study investigated staff attitudes, knowledge and use of evidence-based practices (EBP) and links to organisational culture in a community-based autism early intervention service. An EBP questionnaire was completed by 99 metropolitan and regionally-based professional and paraprofessional staff. Participants reported greater knowledge and use…
Krajewski, Sarah J.; Schwartz, Renee
Research supports an explicit-reflective approach to teaching about nature of science (NOS), but little is reported on teachers' journeys as they attempt to integrate NOS into everyday lessons. This participatory action research paper reports the challenges and successes encountered by an in-service teacher, Sarah, implementing NOS for the first time throughout four units of a community college biology course (genetics, molecular biology, evolution, and ecology). Through the action research cycles of planning, implementing, and reflecting, Sarah identified areas of challenge and success. This paper reports emergent themes that assisted her in successfully embedding NOS within the science content. Data include weekly lesson plans and pre/post reflective journaling before and after each lesson of this lecture/lab combination class that met twice a week. This course was taught in back-to-back semesters, and this study is based on the results of a year-long process. Developing pedagogical content knowledge (PCK) for NOS involves coming to understand the overlaps and connections between NOS, other science subject matter, pedagogical strategies, and student learning. Sarah found that through action research she was able to grow and assimilate her understanding of NOS within the biology content she was teaching. A shift in orientation from teaching products of science to teaching science processes was necessary for NOS pedagogical success. This process enabled Sarah's development of PCK for NOS. As a practical example of putting research-based instructional recommendations into practice, this study may be very useful for other teachers who are learning to teach NOS.
Education, and implicitly educational services, become extremely important in the context of the knowledge-based society. This study therefore investigates trends in delivering educational services, identified through a review of the literature as well as through personal experience in providing such services. It concludes that information and communication technology creates a vast opportunity to improve the delivery of educational services within the knowledge-based society, to develop (educate) people's awareness of the need for knowledge, and to develop their skills for the knowledge-based society.
In the knowledge-based economy, a company performs a set of activities focused on knowledge: identifying necessary knowledge, buying knowledge, learning, acquiring knowledge, creating knowledge, storing knowledge, sharing knowledge, using knowledge, protecting knowledge, and capitalizing on knowledge. As a result, a new function emerges: the knowledge function. In knowledge-based companies, not all knowledge has the same impact. Analysis of the actual situation in the most developed and best-performing knowledge-based companies reveals the emergence of a new category of knowledge: strategic knowledge. Generating this category of knowledge is a new kind of challenge for the scientific system.
Shahrul Azman Noah
The inclusion of real-world knowledge or specialised knowledge has not been addressed by the majority of the systems reviewed. ODA provides real-world knowledge by using a thesaurus-type structure to represent generic models. Only NITDT includes specialised knowledge in its knowledge base, classifying it into application-specific, domain-specific, and general knowledge. However, the literature does not discuss in detail how this knowledge is applied during the design session. One of the key factors that distinguishes computer-based expert systems from human experts is that the latter apply not only their specialised expertise to a problem but also their general knowledge of the world. NITDT is the only system reviewed here that holds any form of internal domain-specific knowledge, which can be easily augmented, enriched, and updated as required. This knowledge allows the designer to be an active participant along with the user in the design process and significantly eases the user's task. The inclusion of real-world knowledge and specialised knowledge is an area that must be further addressed before intelligent tools are able to offer a realistic level of assistance to human designers.
H. de Swaan Arons; Ph. Waalewijn
Strategic analysis and planning is a field in which expertise and experience are key factors. In order to decide on strategic matters, such as the competitive position of a company, experts lean heavily on their ability to reason with uncertain or incomplete knowledge, or in other words on
Much of the knowledge currently in STILL came initially from the open literature (Economopoulos, 1978; Henley and Seader, 1981; Van Winkle, 1967). Henley, E. J. and J. D. Seader (1981). Equilibrium-Stage Separation Operations in Chemical Engineering. New York: John Wiley & Sons, Inc. McDermott, J.
Grant, John A.
Results of a questionnaire concerning factual knowledge of attitudes toward, and experience with a variety of drugs are reported. It was concluded that marihuana and other drugs are readily available to secondary school students, and widespread experimentation exists; however, a strict dichotomy exists between marihuana and other drugs. (Author/BY)
The process of designing aircraft systems is becoming more and more complex, due to an increasing number of requirements. Moreover, the knowledge of how to solve these complex design problems is becoming less readily available, because of a decrease in the availability of intellectual resources and reduced knowledge
Mann, Giora; Dana-Picard, Thierry; Zehavi, Nurit
This article begins with a comparison of two groups of teachers, working on the same tasks in Analytic Geometry. One group has only basic experience in CAS-assisted problem solving, and the other group has extensive experience. The comparison is discussed in terms of the interplay between reflection, operative knowledge and execution. The findings…
Baker, Amanda; Murphy, John
Despite decades of advocacy for greater investigative attention, research into pronunciation instruction in the teaching of English as a second language (ESL) and English as a foreign language (EFL) continues to be limited. This limitation is particularly evident in explorations of teacher cognition (e.g., teachers' knowledge, beliefs, and…
Abadía, Javier; Vázquez, Saúl; Rellán-Álvarez, Rubén; El-Jendoubi, Hamdi; Abadía, Anunciación; Alvarez-Fernández, Ana; López-Millán, Ana Flor
Iron (Fe) deficiency-induced chlorosis is a major nutritional disorder in crops growing in calcareous soils. Iron deficiency in fruit tree crops causes chlorosis, decreases in vegetative growth, and marked fruit yield and quality losses. Therefore, Fe fertilizers, either applied to the soil or delivered to the foliage, are used every year to control Fe deficiency in these crops. On the other hand, a substantial body of knowledge is available on the fundamentals of Fe uptake, long- and short-distance Fe transport, and subcellular Fe allocation in plants. Most of this basic knowledge, however, applies only to Fe deficiency, with studies involving Fe fertilization (i.e., with Fe-deficient plants resupplied with Fe) being still scarce. This paper reviews recent developments in Fe-fertilizer research and the state of the art of knowledge on Fe acquisition, transport, and utilization in plants. The effects of Fe fertilization on the plant responses to Fe deficiency are also reviewed. Agronomical Fe-fertilization practices should benefit from the basic knowledge on plant Fe homeostasis already available; this should be considered a long-term goal that can optimize fertilizer inputs, reduce growers' costs, and minimize the environmental impact of fertilization.
This research paper describes the use of computer-mediated conferencing (CMC) to support the teaching of biology to undergraduates. The use of this pedagogical innovation was a first-time experience for both the instructor and his students. The objectives of this project were to increase students' active participation, to facilitate collaborative knowledge building, and to enhance the use of the scientific approach in problem-solving activities. The data indicate that some positive results were achieved for each objective. The use of online computer conferences shows a lot of promise when it is based on reflection, problem-solving, collaborative learning, and knowledge building. Internet conferencing tools support students as they reflect and work together, and open doors to numerous new educational experiences for both students and professors.
NEMOTO Yutaro; AKASAKA Fumiya; CHIBA Ryosuke; SHIMOMURA Yoshiki
In service engineering, a service is represented as a functional structure that satisfies customer requirements. Specific entities and their activities are associated with a functional structure as a way to accomplish a goal. In this phase, it is important for service designers to have broad knowledge, since the entities that constitute a service include both humans and physical products. Therefore, the extent of the designer's knowledge is key to the enhancement of design solutions. However, few tools to support designers in the embodiment phase have been proposed. In this paper, for the purpose of constructing a function-embodiment knowledge base for service design, a representational form of knowledge is proposed and a prototype function-embodiment knowledge base system is established. Function-embodiment knowledge is then collected from multiple service cases using the prototype system, and the effectiveness of the knowledge base is discussed.
Rule-based methods have traditionally been applied to develop knowledge-based systems that replicate expert performance in a deep but narrow problem domain. Knowledge engineers capture expert knowledge and encode it as a set of rules for automating the expert's reasoning process to solve problems in a variety of domains. We describe the development of a knowledge-based system approach to enhance program comprehension of Service Oriented Architecture (SOA) software. Our approach uses rule-based methods to automate the analysis of the set of artifacts involved in building and deploying a SOA composite application. The rules codify expert knowledge to abstract information from these artifacts to facilitate program comprehension and thus assist software engineers as they perform system maintenance activities. A main advantage of the knowledge-based approach is its adaptability to the heterogeneous and dynamically evolving nature of SOA environments.
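The rule-based analysis described here can be sketched as a tiny forward-chaining engine over facts extracted from deployment artifacts. The engine pattern is standard; the artifact names, fact tuples, and rules are hypothetical examples, not the authors' actual rule set.

```python
def run_rules(facts, rules):
    """Minimal forward chaining: each rule is (condition, consequences);
    apply rules until no rule derives anything new, then return all facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, conseq in rules:
            if cond(derived):
                new = conseq - derived
                if new:
                    derived |= new
                    changed = True
    return derived

# Hypothetical facts abstracted from a SOA deployment descriptor
facts = {("artifact", "composite.xml"),
         ("references", "composite.xml", "OrderService")}
rules = [
    # An artifact named composite.xml is taken to be an SCA composite
    (lambda f: ("artifact", "composite.xml") in f,
     {("kind", "composite.xml", "SCA-composite")}),
    # A composite that references a service is taken to deploy it
    (lambda f: ("kind", "composite.xml", "SCA-composite") in f
               and ("references", "composite.xml", "OrderService") in f,
     {("deploys", "OrderService")}),
]
print(("deploys", "OrderService") in run_rules(facts, rules))  # True
```

Encoding comprehension knowledge this way keeps it separate from the artifact parsers, which is what gives the approach its adaptability to evolving SOA environments.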
ZHAO, Ruixue; XIAN, Guojian; KOU, Yuantao; ZHANG, Xuefu; AN, Xinying; LE, Xiaoqiu; BAI, Haiyan
Purpose: The study was carried out to construct a domain knowledge service system based on the Scientific & Technological Knowledge Organization Systems (STKOS). Design/methodology/approach: The framework of a domain knowledge service system is designed on the basis of the STKOS, and the STKOS science and technology vocabularies, category systems, and ontology networks are applied to realize the knowledge organization and semantic linking of the scientific and technological information resources. Meanwhile, related knowledge-mining analysis algorithms and models are improved, and some tools such as Solr and D3 are used for developing the system. This system integrates various knowledge service modules, including unified search of domain information resources and knowledge-linked navigation, domain hotspot and burst topics monitoring analysis, knowledge structure and evolution analysis, literature citation network, and research agents' cooperative relationship network analysis. Findings: The system can help to refine descriptions, knowledge organization, and the semantic linking of various kinds of information resources closely related to science and technology. Such resources include domain literature, institutions, scientists, projects, and more. Research limitations: Trial assessment and performance improvement should be carried out for the knowledge service application on the basis of more types of and larger quantities of domain information resources. Practical implications: The domain knowledge service system provides an integrated knowledge discovery tool, as well as several kinds of knowledge-mining analysis services for researchers. Originality/value: Our practice can be used as a valuable guide for libraries and information institutions that plan to provide deep domain knowledge services.
Valluru, Ravi; Reynolds, Matthew P; Salse, Jerome
Transferring knowledge bases between related species may assist in enlarging the yield potential of crop plants. Being cereals, rice and wheat share a high level of gene conservation; however, they differ at the metabolic level as part of their environmental adaptation, resulting in different yield capacities. This review focuses on the current understanding of the genetic and molecular regulation of yield-associated traits in both crop species, highlights the similarities and differences, and presents the putative knowledge gaps. We focus on the traits associated with phenology, photosynthesis, assimilate partitioning, and lodging resistance: the most important drivers of yield potential. Currently, there are large knowledge gaps in the genetic and molecular control of such major biological processes that can be filled through a translational biology approach, transferring genomics and genetics information between rice and wheat.
Bergstad, O A
This paper summarizes knowledge and knowledge gaps on benthic and benthopelagic deep-water fishes of the North Atlantic Ocean, i.e. species inhabiting deep continental shelf areas, continental and island slopes, seamounts and the Mid-Atlantic Ridge. While several studies demonstrate that distribution patterns are species specific, several also show that assemblages of species can be defined and such assemblages are associated with circulatory features and water mass distributions. In many subareas, sampling has, however, been scattered, restricted to shallow areas or soft substrata, and results from different studies tend to be difficult to compare quantitatively because of sampler differences. Particularly, few studies have been conducted on isolated deep oceanic seamounts and in Arctic deep-water areas. Time series of data are very few and most series are short. Recent studies of population structure of widely distributed demersal species show less than expected present connectivity and considerable spatial genetic heterogeneity and complexity for some species. In other species, genetic homogeneity across wide ranges was discovered. Mechanisms underlying the observed patterns have been proposed, but to test emerging hypotheses more species should be investigated across their entire distribution ranges. Studies of population biology reveal greater diversity in life-history strategies than often assumed, even between co-occurring species of the same family. Some slope and ridge-associated species are rather short-lived, others very long-lived, and growth patterns also show considerable variation. Recent comparative studies suggest variation in life-history strategies along a continuum correlated with depth, ranging from shelf waters to the deep sea where comparatively more species have extended lifetimes, and slow rates of growth and reproduction. Reproductive biology remains too poorly known for most deep-water species, and temporal variation in recruitment has
On 21 March 1996, Eritrea acceded to the Convention on Biological Diversity which, among other things, obliges states to sustainably conserve and develop customary uses of biological resources. Among the many forms of traditional practices involving biological resources is traditional medicinal knowledge. Research has revealed that Eritrea has an abundant pool of such knowledge and that a high percentage of its population, as is true of many developing and underdeveloped countries, resorts to traditional medicine for curing numerous ailments. However, no specific policy or legislative framework has yet been developed to sift, preserve, and encourage the practice. Analysis of existing Eritrean laws and policies shows that they are neither adequate nor specific enough to be used in the preservation and development of Eritrean traditional medicinal knowledge. This article will therefore, in view of the rich yet unregulated traditional medicinal knowledge resource in Eritrea, highlight the need for the development of specific legislation for Eritrea from the perspective of international and country-level experiences. It will be argued that the development of specific legislation is preferable to the alternative of keeping traditional medicinal knowledge as a component of a legal instrument developed for a broader field such as health or traditional knowledge generally.
“Knowledge is power” (Sir Francis Bacon (1561-1626), Religious Meditations, Of Heresies, 1597). “Science is organised knowledge. Wisdom is organised life” (Immanuel Kant (1724-1804)). The 29th International Conference on Informatics for Environmental Protection (EnviroInfo) and the third International Conference on ICT for Sustainability (ICT4S) ... and its Special Interest Group Environmental Informatics, Informatics for Environmental Protection, Sustainability and Risk Management, which has a longstanding tradition of discussing fundamental aspects in the field of environmental informatics. ICT4S is a series of research conferences bringing ... and interpret environmental information and support the discourse of environmental issues. Over the years, EnviroInfo has gathered communities from European countries and aims to foster scientific progress by bringing together research and practice. ICT for sustainability is about utilising the transformational ...
Shkotin, Alex; Kudryavtsev, Dmitry
This paper presents our work on development of OWL-driven systems for formal representation and reasoning about terminological knowledge and facts in petrology. The long-term aim of our project is to provide solid foundations for a large-scale integration of various kinds of knowledge, including basic terms, rock classification algorithms, findings and reports. We describe three steps we have taken towards that goal here. First, we develop a semi-automated procedure for transforming a database of igneous rock samples to texts in a controlled natural language (CNL), and then a collection of OWL ontologies. Second, we create an OWL ontology of important petrology terms currently described in natural language thesauri. We describe a prototype of a tool for collecting definitions from domain experts. Third, we present an approach to formalization of current industrial standards for classification of rock samples, which requires linear equations in OWL 2. In conclusion, we discuss a range of opportunities arising ...