WorldWideScience

Sample records for web ontology languages

  1. SELECTION OF ONTOLOGY FOR WEB SERVICE DESCRIPTION LANGUAGE TO ONTOLOGY WEB LANGUAGE CONVERSION

    OpenAIRE

    J. Mannar Mannan; M. Sundarambal; S. Raghul

    2014-01-01

    The Semantic Web extends the current human-readable web by encoding some of the semantics of resources in a machine-processable form. As a Semantic Web component, Semantic Web Services (SWS) use a mark-up that makes data machine readable in a detailed and sophisticated way. One such language is the Ontology Web Language (OWL). Existing conventional web service annotations can be converted to semantic web services by mapping the Web Service Description Language (WSDL) to the semantic annotation of O...

  2. Introduction to Semantic Web Ontology Languages

    NARCIS (Netherlands)

    Antoniou, Grigoris; Franconi, Enrico; Van Harmelen, Frank

    2005-01-01

    The aim of this chapter is to give a general introduction to some of the ontology languages that play a prominent role on the Semantic Web, and to discuss the formal foundations of these languages. Web ontology languages will be the main carriers of the information that we will want to share and

  3. tOWL: a temporal Web Ontology Language.

    Science.gov (United States)

    Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    2012-02-01

    Through its interoperability and reasoning capabilities, the Semantic Web opens a realm of possibilities for developing intelligent systems on the Web. The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies, the cornerstone of the Semantic Web. However, up until now, no standard way of expressing time and time-dependent information in OWL has been provided. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL language. Through a layered approach, we introduce three extensions: 1) concrete domains, which allow the representation of restrictions using concrete domain binary predicates; 2) temporal representation, which introduces time points, relations between time points, intervals, and Allen's 13 interval relations into the language; and 3) timeslices/fluents, which implement a perdurantist view on individuals and allow for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by using an example from the financial domain.
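
    The timeslice/fluent layer described above can be illustrated with a small RDF sketch. The following Python/rdflib snippet is a minimal illustration of the perdurantist pattern only, not the official tOWL vocabulary; class and property names such as TimeSlice, isTimeSliceOf and hasState are assumptions chosen for the example.

```python
# Minimal sketch of the timeslice/fluent pattern (illustrative vocabulary,
# not the official tOWL terms), built with rdflib.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/towl-sketch#")
g = Graph()
g.bind("ex", EX)

# The enduring individual (a company, as in the financial example).
g.add((EX.AcmeCorp, RDF.type, EX.Company))

# Two timeslices of the same individual, each valid over its own interval
# and carrying a time-dependent process state (a "fluent").
for name, state, start, end in [
    ("slice1", "InNegotiation", "2011-01-01", "2011-03-31"),
    ("slice2", "Merged",        "2011-04-01", "2011-12-31"),
]:
    ts = EX[name]
    g.add((ts, RDF.type, EX.TimeSlice))
    g.add((ts, EX.isTimeSliceOf, EX.AcmeCorp))
    g.add((ts, EX.hasState, EX[state]))
    g.add((ts, EX.intervalStart, Literal(start, datatype=XSD.date)))
    g.add((ts, EX.intervalEnd, Literal(end, datatype=XSD.date)))

print(g.serialize(format="turtle"))
```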

  4. ONTOLOGY BASED MEANINGFUL SEARCH USING SEMANTIC WEB AND NATURAL LANGUAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    K. Palaniammal

    2013-10-01

    The Semantic Web extends the current World Wide Web by adding facilities for the machine-understandable description of meaning. The ontology-based search model is used to enhance the efficiency and accuracy of information retrieval. Ontology is the core technology for the Semantic Web and the mechanism for representing formal and shared domain descriptions. In this paper, we propose ontology-based meaningful search using Semantic Web and Natural Language Processing (NLP) techniques in the educational domain. First we build the educational ontology, then we present the semantic search system. The search model consists of three parts: embedded spell checking, finding synonyms using the WordNet API, and querying the ontology using the SPARQL language. The results are sensitive both to spelling and to synonymous context. This paper provides more accurate results and the complete details for the selected field in a single page.
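
    A rough sketch of the three-part pipeline described above (spell checking, WordNet synonym expansion, SPARQL querying) might look as follows in Python. The educational ontology file name and its labels are hypothetical, and the spell checker is reduced to a closest-match lookup; the paper's actual components are not reproduced here.

```python
# Sketch of the three-stage search pipeline: spell correction, WordNet
# synonym expansion, and a SPARQL label lookup over an educational ontology.
# Requires: pip install rdflib nltk; run nltk.download("wordnet") once.
import difflib
from nltk.corpus import wordnet as wn
from rdflib import Graph

KNOWN_TERMS = ["algebra", "geometry", "calculus", "grammar"]

def spell_check(word):
    # Simplified stand-in for the paper's spell checker.
    match = difflib.get_close_matches(word.lower(), KNOWN_TERMS, n=1)
    return match[0] if match else word.lower()

def synonyms(word):
    return {l.replace("_", " ") for s in wn.synsets(word) for l in s.lemma_names()}

def search(keyword, ontology_file="education.owl"):  # hypothetical ontology file
    term = spell_check(keyword)
    terms = {term} | synonyms(term)
    g = Graph().parse(ontology_file, format="xml")
    values = " ".join('"%s"' % t for t in sorted(terms))
    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?concept WHERE {
            VALUES ?label { %s }
            ?concept rdfs:label ?l .
            FILTER(LCASE(STR(?l)) = LCASE(?label))
        }""" % values
    return [str(row.concept) for row in g.query(query)]

print(search("algbra"))  # corrected to "algebra", then expanded with synonyms
```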

  5. Mapping between the OBO and OWL ontology languages.

    Science.gov (United States)

    Tirmizi, Syed Hamid; Aitken, Stuart; Moreira, Dilvan A; Mungall, Chris; Sequeda, Juan; Shah, Nigam H; Miranker, Daniel P

    2011-03-07

    Ontologies are commonly used in biomedicine to organize concepts to describe domains such as anatomies, environments, experiments, taxonomies, etc. NCBO BioPortal currently hosts about 180 different biomedical ontologies. These ontologies have been mainly expressed in either the Open Biomedical Ontology (OBO) format or the Web Ontology Language (OWL). OBO emerged from the Gene Ontology, and supports most of the biomedical ontology content. In comparison, OWL is a Semantic Web language, and is supported by the World Wide Web Consortium together with integral query languages, rule languages and distributed infrastructure for information interchange. These features are highly desirable for the OBO content as well. A convenient method for leveraging these features for OBO ontologies is to transform OBO ontologies to OWL. We have developed a methodology for translating OBO ontologies to OWL using the organization of the Semantic Web itself to guide the work. The approach reveals that the constructs of OBO can be grouped together to form a similar layer cake. Thus we were able to decompose the problem into two parts. Most OBO constructs have easy and obvious equivalence to a construct in OWL. A small subset of OBO constructs requires deeper consideration. We have defined transformations for all constructs in an effort to foster a standard common mapping between OBO and OWL. Our mapping produces OWL-DL, a Description Logics based subset of OWL with desirable computational properties for efficiency and correctness. Our Java implementation of the mapping is part of the official Gene Ontology project source. Our transformation system provides a lossless roundtrip mapping for OBO ontologies, i.e. an OBO ontology may be translated to OWL and back without loss of knowledge. In addition, it provides a roadmap for bridging the gap between the two ontology languages in order to enable the use of ontology content in a language independent manner.
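
    As a minimal illustration of the OBO-to-OWL direction discussed above, the following Python/rdflib sketch turns a single OBO [Term] stanza (id, name, is_a) into an owl:Class with a label and a subclass axiom. The term identifiers are invented, and the official mapping covers many more constructs (synonyms, xrefs, relationship types, and so on).

```python
# Sketch only: id/name/is_a from an OBO [Term] stanza become an owl:Class,
# an rdfs:label, and rdfs:subClassOf axioms. IDs here are made up.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

OBO = Namespace("http://purl.obolibrary.org/obo/")

stanza = {"id": "XX:0000001", "name": "example term", "is_a": ["XX:0000000"]}

def obo_term_to_owl(term, g):
    cls = OBO[term["id"].replace(":", "_")]   # XX:0000001 -> obo:XX_0000001
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(term["name"])))
    for parent in term.get("is_a", []):
        g.add((cls, RDFS.subClassOf, OBO[parent.replace(":", "_")]))
    return cls

g = Graph()
g.bind("obo", OBO)
obo_term_to_owl(stanza, g)
print(g.serialize(format="turtle"))
```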

  6. Ontology Enabled Generation of Embedded Web Services

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Zhang, Weishan; Soares, Goncalo Teofilo Afonso Pinheiro

    2008-01-01

    and software platforms, and of devices state and context changes. To address these challenges, we developed a Web service compiler, Limbo, in which Web Ontology Language (OWL) ontologies are used to make the Limbo compiler aware of its compilation context, such as targeted hardware and software. At the same...... time, knowledge on device details, platform dependencies, and resource/power consumption is built into the supporting ontologies, which are used to configure Limbo for generating resource efficient web service code. A state machine ontology is used to generate stub code to facilitate handling of state...

  7. Model Problems in Technologies for Interoperability: OWL Web Ontology Language for Services (OWL-S)

    National Research Council Canada - National Science Library

    Metcalf, Chris; Lewis, Grace A

    2006-01-01

    .... The OWL Web Ontology Language for Services (OWL-S) is a language to describe the properties and capabilities of Web Services in such a way that the descriptions can be interpreted by a computer system in an automated manner. This technical note presents the results of applying the model problem approach to examine the feasibility of using OWL-S to allow applications to automatically discover, compose, and invoke services in a dynamic services-oriented environment.

  8. Ontology Enabled Generation of Embedded Web Services

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Zhang, Weishan; Soares, Goncalo Teofilo Afonso Pinheiro

    2008-01-01

    Web services are increasingly adopted as a service provision mechanism in pervasive computing environments. Implementing web services on networked, embedded devices raises a number of challenges, for example efficiency of web services, handling of variability and dependencies of hardware...... and software platforms, and of devices state and context changes. To address these challenges, we developed a Web service compiler, Limbo, in which Web Ontology Language (OWL) ontologies are used to make the Limbo compiler aware of its compilation context, such as targeted hardware and software. At the same...... time, knowledge on device details, platform dependencies, and resource/power consumption is built into the supporting ontologies, which are used to configure Limbo for generating resource efficient web service code. A state machine ontology is used to generate stub code to facilitate handling of state...

  9. An ontology-driven tool for structured data acquisition using Web forms.

    Science.gov (United States)

    Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A

    2017-08-01

    Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
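
    The kind of SPARQL analysis described above can be sketched as follows; the tiny in-memory data set and the ex: vocabulary (FormResponse, assesses, score) are illustrative assumptions, not the paper's actual ontology of form data.

```python
# Sketch: query "semantically enriched" form responses with SPARQL via rdflib.
from rdflib import Graph

DATA = """
@prefix ex: <http://example.org/forms#> .
ex:response1 a ex:FormResponse ; ex:assesses ex:MotorFunction ; ex:score 2 .
ex:response2 a ex:FormResponse ; ex:assesses ex:MotorFunction ; ex:score 5 .
"""

g = Graph().parse(data=DATA, format="turtle")
q = """
PREFIX ex: <http://example.org/forms#>
SELECT ?r ?s WHERE {
  ?r a ex:FormResponse ; ex:assesses ex:MotorFunction ; ex:score ?s .
  FILTER(?s < 3)   # e.g. flag responses suggesting impaired motor function
}"""
for row in g.query(q):
    print(row.r, row.s)
```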

  10. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    Science.gov (United States)

    2006-08-01

    Sycara, and T. Nishimura, "Towards a Semantic Web Ecommerce ," in Proceedings of 6th Conference on Business Information Systems (BIS2003), Colorado...the ontology used is the fictitious ontology http://fly.com/Onto. The advantage of using concepts from Web-addressable ontologies, rather than XML...the advantage of the OWL-S approach compared with other approaches, namely BPEL4WS and WS-CDL, is that OWL-S allows the flexibility to change the

  11. Web Ontologies to Categorially Structure Reality: Representations of Human Emotional, Cognitive and Motivational Processes

    Directory of Open Access Journals (Sweden)

    Juan-Miguel López-Gil

    2016-04-01

    This work presents a Web ontology for modeling and representation of the emotional, cognitive and motivational state of online learners, interacting with university systems for distance or blended education. The ontology is understood as a way to provide the required mechanisms to model reality and associate it to emotional responses, but without committing to a particular way of organizing these emotional responses. Knowledge representation for the contributed ontology is performed by using the Web Ontology Language (OWL), a semantic web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. OWL is a computational logic-based language such that computer programs can exploit knowledge expressed in OWL and also facilitates sharing and reusing knowledge using the global infrastructure of the Web. The proposed ontology has been tested in the field of Massive Open Online Courses (MOOCs) to check if it is capable of representing emotions and motivation of the students in this context of use.

  12. Web Ontologies to Categorially Structure Reality: Representations of Human Emotional, Cognitive, and Motivational Processes

    Science.gov (United States)

    López-Gil, Juan-Miguel; Gil, Rosa; García, Roberto

    2016-01-01

    This work presents a Web ontology for modeling and representation of the emotional, cognitive and motivational state of online learners, interacting with university systems for distance or blended education. The ontology is understood as a way to provide the required mechanisms to model reality and associate it to emotional responses, but without committing to a particular way of organizing these emotional responses. Knowledge representation for the contributed ontology is performed by using Web Ontology Language (OWL), a semantic web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. OWL is a computational logic-based language such that computer programs can exploit knowledge expressed in OWL and also facilitates sharing and reusing knowledge using the global infrastructure of the Web. The proposed ontology has been tested in the field of Massive Open Online Courses (MOOCs) to check if it is capable of representing emotions and motivation of the students in this context of use. PMID:27199796

  13. The use of web ontology languages and other semantic web tools in drug discovery.

    Science.gov (United States)

    Chen, Huajun; Xie, Guotong

    2010-05-01

    To optimize drug development processes, pharmaceutical companies require principled approaches to integrate disparate data on a unified infrastructure, such as the web. The semantic web, developed on the web technology, provides a common, open framework capable of harmonizing diversified resources to enable networked and collaborative drug discovery. We survey the state of the art of utilizing web ontologies and other semantic web technologies to interlink both data and people to support integrated drug discovery across domains and multiple disciplines. Particularly, the survey covers three major application categories including: i) semantic integration and open data linking; ii) semantic web service and scientific collaboration and iii) semantic data mining and integrative network analysis. The reader will gain: i) basic knowledge of the semantic web technologies; ii) an overview of the web ontology landscape for drug discovery and iii) a basic understanding of the values and benefits of utilizing the web ontologies in drug discovery. i) The semantic web enables a network effect for linking open data for integrated drug discovery; ii) The semantic web service technology can support instant ad hoc collaboration to improve pipeline productivity and iii) The semantic web encourages publishing data in a semantic way such as resource description framework attributes and thus helps move away from a reliance on pure textual content analysis toward more efficient semantic data mining.

  14. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
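
    A hedged sketch of calling the BioPortal REST services from Python is shown below. The base URL, the /search path, the apikey parameter, and the shape of the JSON response follow the publicly documented data.bioontology.org conventions as best recalled here; treat them as assumptions and confirm against the current NCBO documentation.

```python
# Sketch of a term search against the BioPortal REST API using `requests`.
# Endpoint, parameters and response fields are assumptions to verify.
import requests

API_BASE = "https://data.bioontology.org"   # assumed current REST base URL
API_KEY = "YOUR_NCBO_API_KEY"               # free key from a BioPortal account

def search_terms(text):
    resp = requests.get(
        f"{API_BASE}/search",
        params={"q": text, "apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # "collection" and "prefLabel" reflect the assumed response layout.
    return [hit.get("prefLabel") for hit in resp.json().get("collection", [])]

print(search_terms("melanoma"))
```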

  15. CNTRO: A Semantic Web Ontology for Temporal Relation Inferencing in Clinical Narratives.

    Science.gov (United States)

    Tao, Cui; Wei, Wei-Qi; Solbrig, Harold R; Savova, Guergana; Chute, Christopher G

    2010-11-13

    Using Semantic-Web specifications to represent temporal information in clinical narratives is an important step for temporal reasoning and answering time-oriented queries. Existing temporal models are either not compatible with the powerful reasoning tools developed for the Semantic Web, or designed only for structured clinical data and therefore are not ready to be applied directly to natural-language-based clinical narrative reports. We have developed a Semantic-Web ontology called the Clinical Narrative Temporal Relation Ontology (CNTRO). Using this ontology, temporal information in clinical narratives can be represented as RDF (Resource Description Framework) triples. More temporal information and relations can then be inferred by Semantic-Web based reasoning tools. Experimental results show that this ontology can represent temporal information in real clinical narratives successfully.

  16. Semantator: annotating clinical narratives with semantic web ontologies.

    Science.gov (United States)

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO Annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator.

  17. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed by building an ontology of each document using a base ontology and a dictionary of concept records; each record is made up of the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between the concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.
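
    The core idea above, comparing documents by both their concepts and the relationships between those concepts, can be sketched as a weighted combination of two overlap scores. The Jaccard measure and the weights below are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: combine concept overlap and relation overlap between two
# document ontologies into a single similarity score.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def document_similarity(doc1, doc2, w_concepts=0.6, w_relations=0.4):
    concept_sim = jaccard(doc1["concepts"], doc2["concepts"])
    relation_sim = jaccard(doc1["relations"], doc2["relations"])
    return w_concepts * concept_sim + w_relations * relation_sim

d1 = {"concepts": {"ontology", "semantic web", "search"},
      "relations": {("ontology", "supports", "search")}}
d2 = {"concepts": {"ontology", "search", "ranking"},
      "relations": {("ontology", "supports", "search"),
                    ("ranking", "uses", "search")}}
print(document_similarity(d1, d2))
```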

  18. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-03-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed by building an ontology of each document using a base ontology and a dictionary of concept records; each record is made up of the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between the concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.

  19. A Customizable Language Learning Support System Using Ontology-Driven Engine

    Science.gov (United States)

    Wang, Jingyun; Mendori, Takahiko; Xiong, Juan

    2013-01-01

    This paper proposes a framework for web-based language learning support systems designed to provide customizable pedagogical procedures based on the analysis of characteristics of both learner and course. This framework employs a course-centered ontology and a teaching method ontology as the foundation for the student model, which includes learner…

  20. Flexible Generation of Pervasive Web Services using OSGi Declarative Services and OWL Ontologies

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Zhang, Weishan; Fernandes, Joao

    2008-01-01

    There is a growing trend to deploy web services in pervasive computing environments. Implementing web services on networked, embedded devices leads to a set of challenges, including productivity of development, efficiency of web services, and handling of variability and dependencies of hardware...... and software platforms. To address these challenges, we developed a web service compiler called Limbo, in which Web Ontology Language (OWL) ontologies are used to make the Limbo compiler aware of its compilation context such as device hardware and software details, platform dependencies, and resource....../power consumption. The ontologies are used to configure Limbo for generating resource-efficient web service code. The architecture of Limbo follows the Blackboard architectural style and Limbo is implemented using the OSGi Declarative Services component model. The component model provides high flexibility...

  1. An open annotation ontology for science on web 3.0.

    Science.gov (United States)

    Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim

    2011-05-17

    There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed, along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. The Annotation Ontology meets critical requirements for

  2. Ontology Versioning and Change Detection on the Web

    NARCIS (Netherlands)

    Klein, Michel; Fensel, Dieter; Kiryakov, Atanas; Ognyanov, Damyan

    2002-01-01

    To effectively use ontologies on the Web, it is essential that changes in ontologies are managed well. This paper analyzes the topic of ontology versioning in the context of the Web by looking at the characteristics of the version relation between ontologies and at the identification of online

  3. OWLing Clinical Data Repositories With the Ontology Web Language.

    Science.gov (United States)

    Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther

    2014-08-01

    The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools that answer the professional needs of researchers, and constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories capable of coping with continuous change in the biomedical domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework for building data repositories with solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and does not require highly skilled technical people, which facilitates the engineering of clinical software systems.

  4. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language, under the Defense Advanced Research Projects Agency (DARPA) Agent Markup Language (DARPA-DAML) program...

  5. Seven golden rules for a web rule language

    NARCIS (Netherlands)

    Wagner, G.R.

    2003-01-01

    Web Ontology Language is now the W3C’s candidate recommendation, 1 which makes me think that the promises of the Semantic Web will come closer to being realities.2 Right? A close reading of the famous Scientific American article and comparison with OWL reveals, however, that OWL cannot account for

  6. CULTURE SEARCH USING ONTOLOGY AND SEMANTIC WEB-BASED RULES FOR ELEMENTARY SCHOOL STUDENTS

    Directory of Open Access Journals (Sweden)

    Rendra Husni Thamrin

    2016-10-01

    The number of Internet users increases rapidly every year. One function of the Internet is to serve as a source of information, and keywords entered into a search engine help to find it. Semantic Web technologies can be used to make the search system more effective, either globally or for a specific domain. The specific subject matter in this study is Indonesian art and culture as taught in fourth-grade elementary school social studies in the odd semester. The ontology bridges the difference in perception between humans and a machine that, in general, simply breaks a query into words. In addition, SWRL (Semantic Web Rule Language) rules are used on top of the ontology that has been built.

  7. Product line based ontology development for semantic web service

    DEFF Research Database (Denmark)

    Zhang, Weishan; Kunz, Thomas

    2006-01-01

    Ontology is recognized as a key technology for the success of the Semantic Web. Building reusable and evolve-able ontologies in order to cope with ontology evolution and requirement changes is increasingly important. But the existing methodologies and tools fail to support effective ontology reuse...... will lead to the initial implementation of the meta-ontologies using design by reuse and with the objective of design for reuse. After that step new ontologies could be generated by reusing these meta-ontologies. We demonstrate our approach with a Semantic Web Service application to show how to build...

  8. Ontology alignment architecture for semantic sensor Web integration.

    Science.gov (United States)

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R; Alarcos, Bernardo

    2013-09-18

    Sensor networks are a concept that has become very popular in data acquisition and processing for multiple applications in different fields such as industrial, medicine, home automation, environmental detection, etc. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related with sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called Semantic Sensor Web and has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: the terminological similarity, which takes into account the linguistic and semantic information of the context of the entity's names, and the structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall.

  9. Ontology Alignment Architecture for Semantic Sensor Web Integration

    Directory of Open Access Journals (Sweden)

    Bernardo Alarcos

    2013-09-01

    Sensor networks are a concept that has become very popular in data acquisition and processing for multiple applications in different fields such as industrial, medicine, home automation, environmental detection, etc. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related with sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called Semantic Sensor Web and has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: the terminological similarity, which takes into account the linguistic and semantic information of the context of the entity's names, and the structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall.
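
    A simplified sketch of combining a terminological and a structural similarity for a candidate entity pair is given below. A plain weighted average stands in for the paper's fuzzy-logic aggregation, and the two measures are deliberately naive stand-ins for the linguistic and structural comparisons described above.

```python
# Sketch: score a candidate alignment between two ontology entities by
# combining a name-based and a neighbourhood-based similarity.
import difflib

def terminological_similarity(label1, label2):
    # String-based stand-in for the linguistic/semantic comparison of names.
    return difflib.SequenceMatcher(None, label1.lower(), label2.lower()).ratio()

def structural_similarity(neighbours1, neighbours2):
    # Overlap of related concepts (parents, children, properties).
    n1, n2 = set(neighbours1), set(neighbours2)
    return len(n1 & n2) / len(n1 | n2) if n1 | n2 else 0.0

def alignment_score(e1, e2, w_term=0.5, w_struct=0.5):
    return (w_term * terminological_similarity(e1["label"], e2["label"])
            + w_struct * structural_similarity(e1["neighbours"], e2["neighbours"]))

temp_a = {"label": "TemperatureSensor", "neighbours": {"Sensor", "observes"}}
temp_b = {"label": "Temperature_Sensor", "neighbours": {"Sensor", "measures"}}
print(alignment_score(temp_a, temp_b))
```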

  10. Towards Web 3.0: taxonomies and ontologies for medical education -- a systematic review.

    Science.gov (United States)

    Blaum, Wolf E; Jarczweski, Anne; Balzer, Felix; Stötzner, Philip; Ahlers, Olaf

    2013-01-01

    Both for curricular development and mapping, as well as for orientation within the mounting supply of learning resources in medical education, the Semantic Web ("Web 3.0") offers a low-threshold, effective tool that enables identification of content-related items across system boundaries. Replacing the manual linking currently required with automatically generated links based on content and semantics requires the use of a suitably structured vocabulary for a machine-readable description of object content. The aim of this study is to compile the existing taxonomies and ontologies used for the annotation of medical content and learning resources, to compare them using selected criteria, and to verify their suitability in the context described above. Based on a systematic literature search, existing taxonomies and ontologies for the description of medical learning resources were identified. Through web searches and/or direct contact with the respective editors, each of the structured vocabularies thus identified was examined with regard to topic, structure, language, scope, maintenance, and technology of the taxonomy/ontology. In addition, suitability for use in the Semantic Web was verified. Among 20 identified publications, 14 structured vocabularies were identified, which differed rather strongly in language, scope, currency, and maintenance. None of the identified vocabularies fulfilled the necessary criteria for content description of medical curricula and learning resources in the German-speaking world. While moving towards Web 3.0, a significant problem lies in the selection and use of an appropriate German-language vocabulary for the machine-readable description of object content. Possible solutions include development, translation and/or combination of existing vocabularies, possibly including partial translations of English vocabularies.

  11. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration

    Science.gov (United States)

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-01

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503

  12. Building a biomedical ontology recommender web service

    Directory of Open Access Journals (Sweden)

    Jonquet Clement

    2010-06-01

    Background: Researchers in biomedical informatics use ontologies and terminologies to annotate their data in order to facilitate data integration and translational discoveries. As the use of ontologies for annotation of biomedical datasets has risen, a common challenge is to identify ontologies that are best suited to annotating specific datasets. The number and variety of biomedical ontologies is large, and it is cumbersome for a researcher to figure out which ontology to use. Methods: We present the Biomedical Ontology Recommender web service. The system uses textual metadata or a set of keywords describing a domain of interest and suggests appropriate ontologies for annotating or representing the data. The service makes a decision based on three criteria. The first is coverage, or the ontologies that provide the most terms covering the input text. The second is connectivity, or the ontologies that are most often mapped to by other ontologies. The final criterion is size, or the number of concepts in the ontologies. The service scores the ontologies as a function of the scores of the annotations created using the National Center for Biomedical Ontology (NCBO) Annotator web service. We used all the ontologies from the UMLS Metathesaurus and the NCBO BioPortal. Results: We compare and contrast our Recommender through an exhaustive functional comparison to previously published efforts. We evaluate and discuss the results of several recommendation heuristics in the context of three real-world use cases. The best recommendation heuristics, rated 'very relevant' by expert evaluators, are the ones based on the coverage and connectivity criteria. The Recommender service (alpha version) is available to the community and is embedded into BioPortal.
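
    The three-criteria ranking described above (coverage, connectivity, size) can be sketched as a normalized weighted score. The weights and normalization below are illustrative assumptions; the actual service derives its inputs from NCBO Annotator results.

```python
# Sketch: rank candidate ontologies by a weighted combination of
# normalized coverage, connectivity and size scores.
def recommend(ontologies, w_coverage=0.55, w_connectivity=0.3, w_size=0.15):
    def norm(values):
        top = max(values) or 1
        return [v / top for v in values]

    cov = norm([o["coverage"] for o in ontologies])
    con = norm([o["connectivity"] for o in ontologies])
    siz = norm([o["size"] for o in ontologies])
    scored = [
        (w_coverage * c + w_connectivity * k + w_size * s, o["name"])
        for o, c, k, s in zip(ontologies, cov, con, siz)
    ]
    return sorted(scored, reverse=True)

candidates = [
    {"name": "OntologyA", "coverage": 120, "connectivity": 40, "size": 50000},
    {"name": "OntologyB", "coverage": 95,  "connectivity": 75, "size": 8000},
]
print(recommend(candidates))
```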

  13. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration.

    Science.gov (United States)

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-04

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
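
    The SPARQL interface mentioned above can be queried programmatically; the sketch below uses SPARQLWrapper. The endpoint URL and the example term URI are assumptions based on publicly listed Ontobee details, so they should be checked against ontobee.org before use.

```python
# Sketch: fetch the rdfs:label of one ontology term via a SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://sparql.hegroup.org/sparql")  # assumed endpoint
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        <http://purl.obolibrary.org/obo/OBI_0000070> rdfs:label ?label .
    }""")

for binding in endpoint.query().convert()["results"]["bindings"]:
    print(binding["label"]["value"])
```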

  14. Webulous and the Webulous Google Add-On--a web service and application for ontology building from templates.

    Science.gov (United States)

    Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James

    2016-01-01

    Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.

  15. Agent Based Knowledge Management Solution using Ontology, Semantic Web Services and GIS

    Directory of Open Access Journals (Sweden)

    Andreea DIOSTEANU

    2009-01-01

    The purpose of our research is to develop an agent-based knowledge management application framework using a specific type of ontology that is able to facilitate semantic web service search and automatic composition. This solution can later be used to develop complex solutions for location-based services, supply chain management, etc. This application for modeling knowledge highlights the importance of agent interaction that leads to efficient enterprise interoperability. Furthermore, it proposes an "agent communication language" ontology that extends the OWL Lite standard approach and makes it more flexible in retrieving proper data for identifying the agents that can best communicate and negotiate.

  16. A semantic web ontology for small molecules and their biological targets.

    Science.gov (United States)

    Choi, Jooyoung; Davis, Melissa J; Newman, Andrew F; Ragan, Mark A

    2010-05-24

    A wide range of data on sequences, structures, pathways, and networks of genes and gene products is available for hypothesis testing and discovery in biological and biomedical research. However, data describing the physical, chemical, and biological properties of small molecules have not been well-integrated with these resources. Semantically rich representations of chemical data, combined with Semantic Web technologies, have the potential to enable the integration of small molecule and biomolecular data resources, expanding the scope and power of biomedical and pharmacological research. We employed the Semantic Web technologies Resource Description Framework (RDF) and Web Ontology Language (OWL) to generate a Small Molecule Ontology (SMO) that represents concepts and provides unique identifiers for biologically relevant properties of small molecules and their interactions with biomolecules, such as proteins. We instanced SMO using data from three public data sources, i.e., DrugBank, PubChem and UniProt, and converted to RDF triples. Evaluation of SMO by use of predetermined competency questions implemented as SPARQL queries demonstrated that data from chemical and biomolecular data sources were effectively represented and that useful knowledge can be extracted. These results illustrate the potential of Semantic Web technologies in chemical, biological, and pharmacological research and in drug discovery.
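
    A competency question of the kind mentioned above ("which small molecules interact with a given protein target?") can be expressed as a SPARQL query. The toy data set and the ex: vocabulary below are illustrative stand-ins for SMO and its DrugBank/PubChem/UniProt-derived instances.

```python
# Sketch: a competency question as a SPARQL query over a tiny example graph.
from rdflib import Graph

DATA = """
@prefix ex: <http://example.org/smo-sketch#> .
ex:imatinib a ex:SmallMolecule ; ex:interactsWith ex:ABL1 .
ex:aspirin  a ex:SmallMolecule ; ex:interactsWith ex:PTGS1 .
"""

g = Graph().parse(data=DATA, format="turtle")
q = """
PREFIX ex: <http://example.org/smo-sketch#>
SELECT ?drug WHERE { ?drug a ex:SmallMolecule ; ex:interactsWith ex:ABL1 . }
"""
for row in g.query(q):
    print(row.drug)
```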

  17. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    Directory of Open Access Journals (Sweden)

    Skene JH Pate

    2007-09-01

    Background: Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results: We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an HTML (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion: We have developed Ontology

  18. A Hydrological Sensor Web Ontology Based on the SSN Ontology: A Case Study for a Flood

    Directory of Open Access Journals (Sweden)

    Chao Wang

    2017-12-01

    Accompanying the continuous development of sensor network technology, sensors worldwide are constantly producing observation data. However, the sensors and their data from different observation platforms are sometimes difficult to use collaboratively in response to natural disasters such as floods, owing to a lack of semantics. In this paper, a hydrological sensor web ontology based on the SSN ontology is proposed to describe heterogeneous hydrological sensor web resources by importing the time and space ontology, instantiating the hydrological classes, and establishing reasoning rules. This work has been validated by semantic querying and knowledge acquisition experiments. The results demonstrate the feasibility and effectiveness of the proposed ontology and its potential to grow into a more comprehensive ontology for collaborative hydrological monitoring. In addition, this method of ontology modeling is generally applicable to other applications and domains.

  19. Self-adaptation of Ontologies to Folksonomies in Semantic Web

    OpenAIRE

    Francisco Echarte; José Javier Astrain; Alberto Córdoba; Jesús Villadangos

    2008-01-01

    Ontologies and tagging systems are two different ways to organize the knowledge present in the current Web. In this paper we propose a simple method to model folksonomies, as tagging systems, with ontologies. We show the scalability of the method using real data sets. The modeling method is composed of a generic ontology that represents any folksonomy and an algorithm to transform the information contained in folksonomies to the generic ontology. The method allows representing folksonomies at...

  20. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the user, so that a lot of irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  1. TermGenie - a web-application for pattern-based ontology class generation.

    Science.gov (United States)

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

    Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.
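
    A template of the kind described above can be sketched as code that instantiates a "regulation of X" pattern into a new OWL class with an equivalence axiom (genus and differentia), which is what enables automatic classification. The URIs and the BiologicalRegulation/regulates vocabulary are illustrative, not real GO identifiers or the actual TermGenie templates.

```python
# Sketch: populate a "regulation of X" design pattern as an OWL class with
# an owl:equivalentClass intersection axiom, using rdflib.
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.collection import Collection
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/termgenie-sketch#")

def instantiate_regulation_template(g, target, label):
    new_cls = EX["regulation_of_" + target.split("#")[-1]]
    g.add((new_cls, RDF.type, OWL.Class))
    g.add((new_cls, RDFS.label, Literal(label)))

    # new_cls EquivalentTo: BiologicalRegulation and (regulates some target)
    restriction = BNode()
    g.add((restriction, RDF.type, OWL.Restriction))
    g.add((restriction, OWL.onProperty, EX.regulates))
    g.add((restriction, OWL.someValuesFrom, target))

    equiv = BNode()
    g.add((equiv, RDF.type, OWL.Class))
    members = BNode()
    Collection(g, members, [EX.BiologicalRegulation, restriction])
    g.add((equiv, OWL.intersectionOf, members))
    g.add((new_cls, OWL.equivalentClass, equiv))
    return new_cls

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)
instantiate_regulation_template(g, EX.cell_migration, "regulation of cell migration")
print(g.serialize(format="turtle"))
```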

  2. Ontological support for web courseware authoring

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.; Cristea, A.I.; Cerri, S.A.; Gouardères, G.; Paraguaçu, F.

    2002-01-01

    In this paper we present an ontology- oriented authoring support system for Web-based courseware. This is an elaboration of our approach to knowledge classification and indexing in the previously developed system AIMS (Agent-based Information Management System) aimed at supporting students while

  3. Foundations of semantic web technologies

    CERN Document Server

    Hitzler, Pascal; Rudolph, Sebastian

    2009-01-01

    Contents: The Quest for Semantics (Building Models; Calculating with Knowledge; Exchanging Information; Semantic Web Technologies). RESOURCE DESCRIPTION FRAMEWORK (RDF): Simple Ontologies in RDF and RDF Schema (Introduction to RDF; Syntax for RDF; Advanced Features; Simple Ontologies in RDF Schema; Encoding of Special Data Structures; An Example); RDF Formal Semantics (Why Semantics?; Model-Theoretic Semantics for RDF(S); Syntactic Reasoning with Deduction Rules; The Semantic Limits of RDF(S)). WEB ONTOLOGY LANGUAGE (OWL): Ontologies in OWL (OWL Syntax and Intuitive Semantics; OWL Species; The Forthcoming OWL 2 Standard); OWL Formal Sem

  4. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    Science.gov (United States)

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts (www.gamuts.net). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. © RSNA, 2014.

  5. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Many methods are used to extract and process query results from the deep Web, relying on the different structures of Web pages and the various design modes of databases; however, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on a domain ontology, which can exploit those semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limitation of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall performance shows that our proposed method is feasible and efficient.
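
    The vector-space step described above can be sketched as ranking extracted blocks by cosine similarity to a domain feature vector. The tokenization, the toy vocabulary, and the example blocks are simplifications for illustration, not the paper's RSEM/BIM models.

```python
# Sketch: represent extracted data blocks as term-frequency vectors and rank
# them by cosine similarity to a domain (book) feature vector.
import math
from collections import Counter

def tf_vector(text):
    return Counter(text.lower().split())

def cosine(v1, v2):
    common = set(v1) & set(v2)
    dot = sum(v1[t] * v2[t] for t in common)
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

domain_vector = tf_vector("book title author publisher isbn price edition")
blocks = [
    "Title: Semantic Web Primer Author: Antoniou Publisher: MIT Press ISBN 978",
    "Copyright 2013 all rights reserved contact us site map",
]
ranked = sorted(blocks, key=lambda b: cosine(domain_vector, tf_vector(b)), reverse=True)
for b in ranked:
    print(round(cosine(domain_vector, tf_vector(b)), 3), b)
```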

  6. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long challenge for the database community. This paper describes a crawler for accessing Deep-Web using Ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.

  7. Language and embodied consciousness: A Peircean ontological ...

    African Journals Online (AJOL)

    An ontology of language: its source and place in First Language ... knowledge they supposedly gain in school with their immediate environment and their lived .... looking stick in space looks bent at the point it enters the medium of water.

  8. Perspectives on ontology learning

    CERN Document Server

    Lehmann, J

    2014-01-01

    Perspectives on Ontology Learning brings together researchers and practitioners from different communities − natural language processing, machine learning, and the semantic web − in order to give an interdisciplinary overview of recent advances in ontology learning. Starting with a comprehensive introduction to the theoretical foundations of ontology learning methods, the edited volume presents the state-of-the-art in automated knowledge acquisition and maintenance. It outlines future challenges in this area with a special focus on technologies suitable for pushing the boundaries beyond the c

  9. Webscripter: End-User Tools for Composition Ontology-Enabled Web Services

    National Research Council Canada - National Science Library

    Frank, Martin

    2005-01-01

    ... (schemes or ontologies) with respect to objects. The DARPA Agent Markup Language (DAML), through the use of ontologies, provides a very powerful way to describe objects and their relationships to other objects...

  10. Aber-OWL: a framework for ontology-based data access in biology

    KAUST Repository

    Hoehndorf, Robert

    2015-01-28

    Background: Many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within these ontologies relies on the use of automated reasoning. Results: We have developed the Aber-OWL infrastructure that provides reasoning services for bio-ontologies. Aber-OWL consists of an ontology repository, a set of web services and web interfaces that enable ontology-based semantic access to biological data and literature. Aber-OWL is freely available at http://aber-owl.net. Conclusions: Aber-OWL provides a framework for automatically accessing information that is annotated with ontologies or contains terms used to label classes in ontologies. When using Aber-OWL, access to ontologies and data annotated with them is not merely based on class names or identifiers but rather on the knowledge the ontologies contain and the inferences that can be drawn from it.
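
    Aber-OWL's actual API is not reproduced here; as a rough rdflib sketch of what "access based on the knowledge the ontologies contain" can mean in practice, the query below retrieves items annotated with any subclass of a query class rather than only items labelled with that class name (the small ontology and the annotatedWith property are invented).

      from rdflib import Graph

      g = Graph()
      g.parse(data="""
      @prefix ex:   <http://example.org/bio#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

      ex:Apoptosis     rdfs:subClassOf ex:CellDeath .
      ex:NecroticDeath rdfs:subClassOf ex:CellDeath .

      ex:paper1 ex:annotatedWith ex:Apoptosis .
      ex:paper2 ex:annotatedWith ex:NecroticDeath .
      ex:paper3 ex:annotatedWith ex:CellCycle .
      """, format="turtle")

      # Everything annotated with ex:CellDeath *or any of its subclasses*.
      q = """
      PREFIX ex:   <http://example.org/bio#>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?item WHERE { ?item ex:annotatedWith/rdfs:subClassOf* ex:CellDeath . }
      """
      for row in g.query(q):
          print(row.item)   # paper1 and paper2, but not paper3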

  11. Development of an Ontology for Occupational Exposure

    Science.gov (United States)

    When discussing a scientific domain, the use of a common language is required, particularly when communicating across disciplines. This common language, or ontology, is a prescribed vocabulary and a web of contextual relationships within the vocabulary that describe the given dom...

  12. Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.

    Science.gov (United States)

    Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan

    2017-10-01

    Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 ( www.comparativego.com ). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
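
    The statistics used inside the Comparative GO tool are not described in this record; a common way to test a GO term for over-representation in a gene list is the hypergeometric test, sketched below with SciPy on invented counts.

      from scipy.stats import hypergeom

      # Hypothetical counts for one GO term:
      M = 25000   # genes in the zebrafish background (population size)
      n = 300     # background genes annotated with the GO term
      N = 150     # genes in the differentially-expressed list (sample size)
      k = 12      # listed genes annotated with the GO term

      # P(X >= k): probability of seeing at least k annotated genes by chance.
      p_value = hypergeom.sf(k - 1, M, n, N)
      print(f"over-representation p-value for this GO term: {p_value:.3e}")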

  13. Using C-OWL for the Alignment and Merging of Medical Ontologies

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; van Harmelen, Frank; Serafini, Luciano; Bouquet, Paolo; Giunchiglia, Fausto

    2004-01-01

    A number of sophisticated medical ontologies have been created over the past years. With their development the need for supporting the alignment of different ontologies is gaining importance. We proposed C-OWL, an extension of the Web Ontology Language OWL that supports alignment mappings between

  14. Ontology and medical diagnosis.

    Science.gov (United States)

    Bertaud-Gounot, Valérie; Duvauferrier, Régis; Burgun, Anita

    2012-03-01

    Ontology and associated generic tools are appropriate for knowledge modeling and reasoning, but most of the time, disease definitions in existing description logic (DL) ontologies are not sufficient to classify a patient's characteristics under a particular disease because they do not formalize operational definitions of diseases (associations of signs and symptoms = diagnostic criteria). The main objective of this study is to propose an ontological representation that takes into account the diagnostic criteria by which specific patient conditions may be classified under a specific disease. This method needs as a prerequisite a clear list of necessary and sufficient diagnostic criteria, as defined for many diseases by learned societies. It does not include probability/uncertainty, which the Web Ontology Language (OWL 2.0) cannot handle. We illustrate it with spondyloarthritis (SpA). The ontology was designed in Protégé 4.1 using OWL-DL 2.0. Several kinds of criteria were formalized: (1) mandatory criteria, (2) picking two criteria among several diagnostic criteria, (3) numeric criteria. Thirty real patient cases were successfully classified with the reasoner. This study shows that it is possible to represent operational definitions of diseases with OWL and successfully classify real patient cases. Representing diagnostic criteria as descriptive knowledge (instead of rules in Semantic Web Rule Language or Prolog) allows us to take advantage of tools already available for OWL. While we focused on the Assessment of SpondyloArthritis international Society SpA criteria, we believe that many of the representation issues addressed here are relevant to using OWL-DL for the operational definition of other diseases in ontologies.
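
    The authors' SpA ontology itself is not included in this record; the Turtle fragment below (parsed with rdflib; class and property names are invented) sketches how the three kinds of criteria mentioned (a mandatory sign, an "at least two of several features" criterion, and a numeric criterion) can be written as a single OWL 2 class expression that a DL reasoner could use to classify patient individuals.

      from rdflib import Graph

      ontology = """
      @prefix :     <http://example.org/spa#> .
      @prefix owl:  <http://www.w3.org/2002/07/owl#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
      @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

      # Hypothetical operational definition in the spirit of the paper:
      # SpA = Disease AND (mandatory sign) AND (>= 2 SpA features) AND (onset age < 45).
      :SpondyloArthritis owl:equivalentClass [
          a owl:Class ;
          owl:intersectionOf (
              :Disease
              [ a owl:Restriction ;                      # (1) mandatory criterion
                owl:onProperty :hasSign ;
                owl:someValuesFrom :Sacroiliitis ]
              [ a owl:Restriction ;                      # (2) pick at least 2 among several
                owl:onProperty :hasFeature ;
                owl:minQualifiedCardinality "2"^^xsd:nonNegativeInteger ;
                owl:onClass :SpAFeature ]
              [ a owl:Restriction ;                      # (3) numeric criterion
                owl:onProperty :ageAtOnset ;
                owl:someValuesFrom [ a rdfs:Datatype ;
                                     owl:onDatatype xsd:integer ;
                                     owl:withRestrictions ( [ xsd:maxExclusive 45 ] ) ] ]
          )
      ] .
      """

      g = Graph()
      g.parse(data=ontology, format="turtle")
      print(f"{len(g)} triples parsed")   # a DL reasoner (e.g. in Protege) would do the classification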

  15. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    Science.gov (United States)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.

  16. Computing an Ontological Semantics for a Natural Language Fragment

    DEFF Research Database (Denmark)

    Szymczak, Bartlomiej Antoni

    tried to establish a domain independent “ontological semantics” for relevant fragments of natural language. The purpose of this research is to develop methods and systems for taking advantage of formal ontologies for the purpose of extracting the meaning contents of texts. This functionality...

  17. A Process for the Representation of openEHR ADL Archetypes in OWL Ontologies.

    Science.gov (United States)

    Porn, Alex Mateus; Peres, Leticia Mara; Didonet Del Fabro, Marcos

    2015-01-01

    ADL is a formal language to express archetypes, independent of standards or domain. However, its specification is not precise enough in relation to the specialization and semantic of archetypes, presenting difficulties in implementation and a few available tools. Archetypes may be implemented using other languages such as XML or OWL, increasing integration with Semantic Web tools. Exchanging and transforming data can be better implemented with semantics oriented models, for example using OWL which is a language to define and instantiate Web ontologies defined by W3C. OWL permits defining significant, detailed, precise and consistent distinctions among classes, properties and relations by the user, ensuring the consistency of knowledge than using ADL techniques. This paper presents a process of an openEHR ADL archetypes representation in OWL ontologies. This process consists of ADL archetypes conversion in OWL ontologies and validation of OWL resultant ontologies using the mutation test.

  18. Benchmarking the Applicability of Ontology in Geographic Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Sachit Rajbhandari

    2017-11-01

    Full Text Available In Geographic Object-based Image Analysis (GEOBIA), identification of image objects is normally achieved using rule-based classification techniques supported by appropriate domain knowledge. However, GEOBIA currently lacks a systematic method to formalise the domain knowledge required for image object identification. Ontology provides a representation vocabulary for characterising domain-specific classes. This study proposes an ontological framework that conceptualises domain knowledge in order to support the application of rule-based classifications. The proposed ontological framework is tested with a landslide case study. The Web Ontology Language (OWL) is used to construct an ontology in the landslide domain. The segmented image objects with extracted features are incorporated into the ontology as instances. The classification rules are written in Semantic Web Rule Language (SWRL) and executed using a semantic reasoner to assign instances to appropriate landslide classes. Machine learning techniques are used to predict new threshold values for feature attributes in the rules. Our framework is compared with published work on landslide detection where ontology was not used for the image classification. Our results demonstrate that a classification derived from the ontological framework accords with non-ontological methods. This study benchmarks the ontological method, providing an alternative approach for image classification in the case study of landslides.
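
    The landslide ontology and the SWRL rules are not part of this record; as a loose stand-in (not the authors' actual rules), the sketch below stores segmented image objects as instances with feature attributes and uses a SPARQL CONSTRUCT query, playing the role of a SWRL rule plus reasoner, to assign objects that exceed invented thresholds to a hypothetical landslide class.

      from rdflib import Graph, Literal, Namespace, RDF, XSD

      EX = Namespace("http://example.org/geobia#")   # invented vocabulary
      g = Graph()
      g.bind("ex", EX)

      # Segmented image objects with extracted feature attributes (made-up values).
      for obj_id, slope, ndvi in [("obj1", 32.0, 0.12), ("obj2", 8.0, 0.55)]:
          obj = EX[obj_id]
          g.add((obj, RDF.type, EX.ImageObject))
          g.add((obj, EX.meanSlope, Literal(slope, datatype=XSD.double)))
          g.add((obj, EX.meanNDVI, Literal(ndvi, datatype=XSD.double)))

      # Roughly: ImageObject(?o) ^ meanSlope(?o,?s) ^ swrlb:greaterThan(?s,25)
      #          ^ meanNDVI(?o,?v) ^ swrlb:lessThan(?v,0.2) -> Landslide(?o)
      rule = """
      PREFIX ex: <http://example.org/geobia#>
      CONSTRUCT { ?o a ex:Landslide . }
      WHERE {
        ?o a ex:ImageObject ;
           ex:meanSlope ?s ;
           ex:meanNDVI  ?v .
        FILTER (?s > 25.0 && ?v < 0.2)
      }
      """
      for triple in g.query(rule):
          g.add(triple)

      print(list(g.subjects(RDF.type, EX.Landslide)))   # [ex:obj1]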

  19. Experiences with Aber-OWL, an Ontology Repository with OWL EL Reasoning

    KAUST Repository

    Slater, Luke; Rodriguez-Garcia, Miguel Angel; O’ Shea, Keiron; Schofield, Paul N.; Gkoutos, Georgios V.; Hoehndorf, Robert

    2016-01-01

    expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within them relies on the use of automated reasoning. We have developed Aber-OWL, an ontology repository that provides OWL EL reasoning to answer queries and verify

  20. Methodology for Automatic Ontology Generation Using Database Schema Information

    Directory of Open Access Journals (Sweden)

    JungHyen An

    2018-01-01

    Full Text Available An ontology is a model language that supports functions to integrate conceptually distributed domain knowledge and to infer relationships among the concepts. Ontologies are developed based on the target domain knowledge. As a result, methodologies that automatically generate an ontology from metadata characterizing the domain knowledge are becoming important. However, existing methodologies for automatically generating an ontology from metadata require the domain metadata to be provided in a predetermined template, and it is difficult to manage the data accumulated in the ontology itself as the domain OWL (Ontology Web Language) individuals continuously increase. A database schema captures features of the domain knowledge and provides structural functions to efficiently process knowledge-based data. In this paper, we propose a methodology to automatically generate ontologies and manage OWL individuals through an interaction between the database and the ontology. We describe the automatic ontology generation process with an example schema and demonstrate the effectiveness of the automatically generated ontology by comparing it with existing ontologies using an ontology quality score.
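
    The paper's generation methodology is only summarized above; the rdflib sketch below (with an invented two-table schema) illustrates the basic idea of mapping each table to an OWL class and each column to a datatype property.

      from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

      EX = Namespace("http://example.org/schema-onto#")   # hypothetical namespace

      # Toy schema information, e.g. as read from INFORMATION_SCHEMA.
      schema = {
          "Book":   ["title", "isbn", "published_year"],
          "Author": ["name", "birth_year"],
      }

      g = Graph()
      g.bind("ex", EX)
      g.bind("owl", OWL)

      for table, columns in schema.items():
          cls = EX[table]
          g.add((cls, RDF.type, OWL.Class))
          g.add((cls, RDFS.label, Literal(table)))
          for column in columns:
              prop = EX[f"{table}_{column}"]
              g.add((prop, RDF.type, OWL.DatatypeProperty))
              g.add((prop, RDFS.domain, cls))

      print(g.serialize(format="turtle"))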

  1. Conceptual Web Users' Actions Prediction for Ontology-Based Browsing Recommendations

    Science.gov (United States)

    Robal, Tarmo; Kalja, Ahto

    The Internet consists of thousands of web sites with different kinds of structures. However, users browse the web according to their informational expectations of the web site they are visiting, having an implicit conceptual model of the domain in their minds. Nevertheless, people tend to repeat themselves and hold partially shared conceptual views while surfing the web, finding some areas of web sites more interesting than others. Herein, we take advantage of the latter and provide a model and a study on predicting users' actions based on web ontology concepts and their relations.

  2. Ontology Language to Support Description of Experiment Control System Semantics, Collaborative Knowledge-Base Design and Ontology Reuse

    International Nuclear Information System (INIS)

    Gyurjyan, Vardan; Abbott, D.; Heyes, G.; Jastrzembski, E.; Moffit, B.; Timmer, C.; Wolin, E.

    2009-01-01

    In this paper we discuss the control domain specific ontology that is built on top of the domain-neutral Resource Description Framework (RDF). Specifically, we will discuss the relevant set of ontology concepts along with the relationships among them in order to describe experiment control components and generic event-based state machines. Control Oriented Ontology Language (COOL) is a meta-data modeling language that provides generic means for representation of physics experiment control processes and components, and their relationships, rules and axioms. It provides a semantic reference frame that is useful for automating the communication of information for configuration, deployment and operation. COOL has been successfully used to develop a complete and dynamic knowledge-base for experiment control systems, developed using the AFECS framework.

  3. The chemical information ontology: provenance and disambiguation for chemical data on the biological semantic web.

    Science.gov (United States)

    Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel

    2011-01-01

    Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA).

  4. The chemical information ontology: provenance and disambiguation for chemical data on the biological semantic web.

    Directory of Open Access Journals (Sweden)

    Janna Hastings

    Full Text Available Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA).

  5. The Chemical Information Ontology: Provenance and Disambiguation for Chemical Data on the Biological Semantic Web

    Science.gov (United States)

    Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel

    2011-01-01

    Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA). PMID:21991315
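
    The Chemical Information Ontology itself is not reproduced in these records; the rdflib fragment below uses made-up resource IRIs and the W3C PROV-O vocabulary, standing in for the ontology's own provenance terms, to sketch the kind of annotation described: a calculated descriptor value linked to the algorithm run that produced it.

      from rdflib import Graph, Literal, Namespace, RDF, XSD

      EX   = Namespace("http://example.org/chem#")      # invented resources
      PROV = Namespace("http://www.w3.org/ns/prov#")    # W3C PROV-O

      g = Graph()
      g.bind("ex", EX)
      g.bind("prov", PROV)

      # A calculated property (descriptor) of a chemical entity ...
      g.add((EX.logP_value_42, RDF.type, EX.PartitionCoefficientDescriptor))
      g.add((EX.logP_value_42, EX.isAbout, EX.caffeine))
      g.add((EX.logP_value_42, EX.hasValue, Literal(-0.07, datatype=XSD.double)))

      # ... annotated with the provenance of the calculation that produced it.
      g.add((EX.logP_value_42, PROV.wasGeneratedBy, EX.xlogp_run_1))
      g.add((EX.xlogp_run_1, RDF.type, PROV.Activity))
      g.add((EX.xlogp_run_1, PROV.used, EX.caffeine_structure_file))

      print(g.serialize(format="turtle"))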

  6. OntoPop: An Ontology Population System for the Semantic Web

    Science.gov (United States)

    Thongkrau, Theerayut; Lalitrojwong, Pattarachai

    The development of an ontology at the instance level requires extracting the terms that define the instances from various data sources. These instances are then linked to the concepts of the ontology, and relationships are created between the instances in the next step. However, before establishing links among data, ontology engineers must classify terms or instances from web documents into ontology concepts. The tool that supports ontology engineers in this task is called an ontology population system. Existing approaches are not well suited to ontology development applications because of long processing times and difficulties in analyzing large or noisy data sets. The OntoPop system introduces a methodology to solve these problems, which comprises two parts. First, we select meaningful features from syntactic relations, which produces more significant features than other methods. Second, we differentiate feature meanings and reduce noise based on latent semantic analysis. Experimental evaluation demonstrates that OntoPop works well, achieving an accuracy of 49.64%, a learning accuracy of 76.93%, and an execution time of 5.46 seconds per instance.
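
    OntoPop's feature extraction from syntactic relations is not reproduced here; the snippet below sketches only the latent semantic analysis step it mentions, using scikit-learn's TruncatedSVD on a tiny invented corpus of instance contexts to reduce noise before comparing instances.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      # Tiny invented "instance contexts" extracted from web documents.
      contexts = [
          "jaguar fast cat jungle predator",
          "leopard big cat spotted predator",
          "jaguar car engine speed luxury",
      ]

      tfidf = TfidfVectorizer().fit_transform(contexts)

      # Latent semantic analysis: project the sparse term space onto 2 latent dimensions.
      lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

      # Similarities in the latent space help decide which ontology concept an instance fits.
      print(cosine_similarity(lsa))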

  7. A web ontology for brain trauma patient computer-assisted rehabilitation.

    Science.gov (United States)

    Zikos, Dimitrios; Galatas, George; Metsis, Vangelis; Makedon, Fillia

    2013-01-01

    In this paper we describe CABROnto, which is a web ontology for the semantic representation of computer-assisted brain trauma rehabilitation. This is a novel and emerging domain, since it employs robotic devices, adaptation software and machine learning to facilitate interactive and adaptive rehabilitation care. We used Protégé 4.2 and the Protégé-OWL schema editor. The primary goal of this ontology is to enable the reuse of the domain knowledge. CABROnto has nine main classes, more than 50 subclasses, and existential and cardinality restrictions. The ontology can be found online at BioPortal.

  8. Ontology Design of Influential People Identification Using Centrality

    Science.gov (United States)

    Maulana Awangga, Rolly; Yusril, Muhammad; Setyawan, Helmi

    2018-04-01

    Identifying influential people, modeled as nodes in a graph, is commonly done through social network analysis. Social network data represent users as nodes and relations as edges, forming a friendship graph. This research considers the different meanings carried by each node relation in the social network. Ontology is well suited to describing social network data conceptually and by domain, and captures essential relationships in a social network beyond what a plain graph provides. Ontology has been proposed as a standard for knowledge representation for the Semantic Web by the World Wide Web Consortium. The formal data representation uses the Resource Description Framework (RDF) and the Web Ontology Language (OWL), which are strategic for open knowledge-based website data. Ontology is used for the semantic description of relationships in the social network, and the ontology is open to extension by adding and modifying various relationship types so that influential people can be identified. This research proposes a model using OWL and RDF for identifying influential people in a social network. The study uses degree centrality, betweenness centrality, and closeness centrality measurements for data validation. In conclusion, influential people on Facebook can be identified with the proposed ontology model using Group, Photos, Photo Tag, Friends, Events, and Works data.
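
    As a hedged illustration of the three validation measures named above, the snippet below computes degree, betweenness and closeness centrality with NetworkX on a small invented friendship graph.

      import networkx as nx

      # Invented friendship graph (nodes are users, edges are relations).
      g = nx.Graph()
      g.add_edges_from([
          ("ana", "budi"), ("ana", "citra"), ("ana", "dewi"),
          ("budi", "citra"), ("dewi", "eko"),
      ])

      for name, scores in [
          ("degree", nx.degree_centrality(g)),
          ("betweenness", nx.betweenness_centrality(g)),
          ("closeness", nx.closeness_centrality(g)),
      ]:
          top = max(scores, key=scores.get)
          print(f"{name:12s} -> most influential: {top} ({scores[top]:.2f})")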

  9. Experiences with Aber-OWL, an Ontology Repository with OWL EL Reasoning

    KAUST Repository

    Slater, Luke

    2016-04-19

    Ontologies are widely used in biology and biomedicine for the annotation and integration of data, and hundreds of ontologies have been developed for this purpose. These ontologies also constitute large volumes of formalized domain knowledge, usually expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within them relies on the use of automated reasoning. We have developed Aber-OWL, an ontology repository that provides OWL EL reasoning to answer queries and verify the consistency of ontologies. Aber-OWL also provides a set of web services which provide ontology-based access to scientific literature in Pubmed and Pubmed Central, SPARQL query expansion to retrieve linked data, and integration with Bio2RDF. Here, we report on our experiences with Aber-OWL and outline a roadmap for future development. Aber-OWL is freely available at http://aber-owl.net.

  10. An Ontology-supported Approach for Automatic Chaining of Web Services in Geospatial Knowledge Discovery

    Science.gov (United States)

    di, L.; Yue, P.; Yang, W.; Yu, G.

    2006-12-01

    Recent developments in the geospatial semantic Web have shown promise for automatic discovery, access, and use of geospatial Web services to quickly and efficiently solve particular application problems. With semantic Web technology, it is highly feasible to construct intelligent geospatial knowledge systems that can provide answers to many geospatial application questions. A key challenge in constructing such an intelligent knowledge system is to automate the creation of a chain or process workflow that involves multiple services and highly diversified data and can generate the answer to a specific question of users. This presentation discusses an approach for automating composition of geospatial Web service chains by employing geospatial semantics described by geospatial ontologies. It shows how ontology-based geospatial semantics are used for enabling the automatic discovery, mediation, and chaining of geospatial Web services. OWL-S is used to represent the geospatial semantics of individual Web services and the type of the services it belongs to and the type of the data it can handle. The hierarchy and classification of service types are described in the service ontology. The hierarchy and classification of data types are presented in the data ontology. For answering users' geospatial questions, an Artificial Intelligence (AI) planning algorithm is used to construct the service chain by using the service and data logics expressed in the ontologies. The chain can be expressed as a graph with nodes representing services and connection weights representing degrees of semantic matching between nodes. The graph is a visual representation of the logical geo-processing path for answering users' questions. The graph can be instantiated to a physical service workflow for execution to generate the answer to a user's question. A prototype system, which includes real world geospatial applications, is implemented to demonstrate the concept and approach.
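
    Neither the OWL-S descriptions nor the AI planner are available in this record; as a toy stand-in, the sketch below describes services only by input and output data types (all names invented) and uses breadth-first search to find a chain that turns the data a user has into the product requested.

      from collections import deque

      # Hypothetical geospatial services: name -> (input type, output type).
      services = {
          "ReprojectDEM":    ("RawDEM", "ProjectedDEM"),
          "SlopeFromDEM":    ("ProjectedDEM", "SlopeGrid"),
          "LandslideRisk":   ("SlopeGrid", "RiskMap"),
          "NDVIFromImagery": ("Imagery", "NDVIGrid"),
      }

      def plan_chain(have: str, want: str):
          """Breadth-first search over service input/output types."""
          queue = deque([(have, [])])
          seen = {have}
          while queue:
              data_type, chain = queue.popleft()
              if data_type == want:
                  return chain
              for name, (inp, out) in services.items():
                  if inp == data_type and out not in seen:
                      seen.add(out)
                      queue.append((out, chain + [name]))
          return None

      print(plan_chain("RawDEM", "RiskMap"))
      # ['ReprojectDEM', 'SlopeFromDEM', 'LandslideRisk']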

  11. Ontological engineering versus metaphysics

    Science.gov (United States)

    Tataj, Emanuel; Tomanek, Roman; Mulawka, Jan

    2011-10-01

    It has been recognized that ontologies form a semantic version of the World Wide Web and can be found in knowledge-based systems. A recent survey of this field also suggests that practical artificial intelligence systems may be motivated by this research. Strong artificial intelligence in particular, as well as the concept of the homo computer, can also benefit from their use. The main objective of this contribution is to present and review already created ontologies and to identify the main advantages such an approach brings to knowledge management systems. We would like to present what ontological engineering borrows from metaphysics and what feedback it can provide to natural language processing, simulation and modelling. Potential topics for further development from a philosophical point of view are also outlined.

  12. Building a semi-automatic ontology learning and construction system for geosciences

    Science.gov (United States)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by potentially a very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the side-lines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows direct participation of geoscientists who have no skills in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowdsource and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using

  13. Application of the Financial Industry Business Ontology (FIBO) for development of a financial organization ontology

    Science.gov (United States)

    Petrova, G. G.; Tuzovsky, A. F.; Aksenova, N. V.

    2017-01-01

    The article considers an approach to the formalized description and meaning harmonization of financial terms by means of semantic modeling. Ontologies for the semantic models are described with the help of special languages developed for the Semantic Web. Results of applying FIBO to different tasks in the Russian financial sector are given.

  14. Inferring ontology graph structures using OWL reasoning

    KAUST Repository

    Rodriguez-Garcia, Miguel Angel

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.

  15. Inferring ontology graph structures using OWL reasoning.

    Science.gov (United States)

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
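
    Onto2Graph itself runs an automated reasoner over the deductive closure; the much simpler sketch below (rdflib plus NetworkX, invented class IRIs) only walks asserted owl:someValuesFrom restrictions, which is enough to show the kind of labelled graph structure the method produces.

      import networkx as nx
      from rdflib import Graph, OWL, RDFS, URIRef

      g = Graph()
      g.parse(data="""
      @prefix :     <http://example.org/anatomy#> .
      @prefix owl:  <http://www.w3.org/2002/07/owl#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

      :Nucleus rdfs:subClassOf [ a owl:Restriction ;
                                 owl:onProperty :part_of ;
                                 owl:someValuesFrom :Cell ] .
      :Cell    rdfs:subClassOf :AnatomicalEntity .
      """, format="turtle")

      ng = nx.MultiDiGraph()
      for sub, sup in g.subject_objects(RDFS.subClassOf):
          if isinstance(sup, URIRef):
              # plain taxonomy edge
              ng.add_edge(sub, sup, label="subClassOf")
          else:
              # 'sub SubClassOf (prop some filler)' becomes a labelled edge sub -prop-> filler
              prop = g.value(sup, OWL.onProperty)
              filler = g.value(sup, OWL.someValuesFrom)
              if prop is not None and filler is not None:
                  ng.add_edge(sub, filler, label=str(prop))

      for s, t, data in ng.edges(data=True):
          print(s, f"--{data['label']}-->", t)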

  16. BioSWR--semantic web services registry for bioinformatics.

    Directory of Open Access Journals (Sweden)

    Dmitry Repchevsky

    Full Text Available Despite the variety of available Web services registries specially aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on Web Service Definition Language (WSDL). Although WSDL is quite flexible to support common Web services types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.

  17. BioSWR--semantic web services registry for bioinformatics.

    Science.gov (United States)

    Repchevsky, Dmitry; Gelpi, Josep Ll

    2014-01-01

    Despite the variety of available Web services registries specially aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on Web Service Definition Language (WSDL). Although WSDL is quite flexible to support common Web services types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.

  18. Design and development of semantic web-based system for computer science domain-specific information retrieval

    Directory of Open Access Journals (Sweden)

    Ritika Bansal

    2016-09-01

    Full Text Available In a semantic web-based system, the concept of an ontology is used to return search results based on the contextual meaning of the input query instead of keyword matching. The research literature suggests a need for a tool that provides an easy interface for posing complex queries in natural language and retrieving domain-specific information from an ontology. This research paper proposes the IRSCSD system (Information Retrieval System for the Computer Science Domain) as a solution. The system offers advanced querying and browsing of structured data, with search results automatically aggregated and rendered directly in a consistent user interface, thus reducing the manual effort of users. The main objective of this research is therefore the design and development of a semantic web-based system integrating an ontology for domain-specific retrieval support. The methodology followed is piecemeal research involving the following stages. The first stage involves designing the framework for the semantic web-based system. The second stage builds the prototype for the framework using the Protégé tool. The third stage deals with converting natural language queries into the SPARQL query language using the Python-based Quepy framework. The fourth stage involves issuing the converted SPARQL queries to the ontology through Apache's Jena API to fetch the results. Lastly, the prototype has been evaluated to ensure its efficiency and usability. This research paper thus throws light on framework development for a semantic web-based system that assists in efficient retrieval of domain-specific information, interpretation of natural language queries into a semantic web language, creation of a domain-specific ontology, and its mapping to related ontologies. The paper also provides approaches and metrics for ontology evaluation, applied to the prototype ontology to study performance based on accessibility of the required domain-related information.
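
    The paper relies on the Quepy framework for the natural-language-to-SPARQL step; rather than guess Quepy's API, the sketch below shows the general idea with a single hand-written regular-expression template over an invented computer-science ontology vocabulary.

      import re
      from typing import Optional

      # One toy template: "who wrote/created <thing>?" -> SPARQL against a hypothetical ontology.
      TEMPLATE = re.compile(r"who (?:wrote|created) (?P<thing>[\w+#]+)\??", re.IGNORECASE)

      def to_sparql(question: str) -> Optional[str]:
          match = TEMPLATE.match(question.strip())
          if not match:
              return None
          thing = match.group("thing").capitalize()
          return f"""
      PREFIX cs: <http://example.org/cs#>
      SELECT ?person WHERE {{
        cs:{thing} cs:createdBy ?person .
      }}"""

      print(to_sparql("Who created Python?"))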

  19. iSemServ: A model-driven approach to developing semantic web services

    CSIR Research Space (South Africa)

    Mtsweni, J

    2014-07-01

    Full Text Available using description languages of choice, such as Web Ontology Language for Services (OWL-S) and Web Application Description Language (WADL). A design science research methodology was employed in conducting the study. The suggested approach was practically...

  20. Fund Finder: A case study of database-to-ontology mapping

    OpenAIRE

    Barrasa Rodríguez, Jesús; Corcho, Oscar; Gómez-Pérez, A.

    2003-01-01

    The mapping between databases and ontologies is a basic problem when trying to "upgrade" deep web content to the semantic web. Our approach suggests the declarative definition of mappings as a way to achieve domain independence and reusability. A specific language, expressive enough to cover some real-world mapping situations such as lightly structured databases or non-first-normal-form ones, is defined for this purpose. Along with this mapping description language, the ODEMapster processor is in ...
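
    The mapping language and the ODEMapster processor are not shown in this record; the sketch below illustrates the declarative idea in miniature: a plain data structure maps relational tables and columns onto classes and properties of a hypothetical funding ontology, and a small function materializes the corresponding triples with rdflib.

      from rdflib import Graph, Literal, Namespace, RDF

      EX = Namespace("http://example.org/funding#")   # hypothetical ontology

      # Declarative mapping: table -> ontology class, column -> ontology property.
      MAPPING = {
          "funding_opportunity": {
              "class": EX.FundingOpportunity,
              "columns": {"title": EX.title, "budget_eur": EX.budget},
          }
      }

      def rows_to_triples(table, rows):
          g = Graph()
          g.bind("ex", EX)
          spec = MAPPING[table]
          for i, row in enumerate(rows):
              subject = EX[f"{table}_{i}"]
              g.add((subject, RDF.type, spec["class"]))
              for column, prop in spec["columns"].items():
                  if column in row:
                      g.add((subject, prop, Literal(row[column])))
          return g

      g = rows_to_triples("funding_opportunity",
                          [{"title": "SME innovation grant", "budget_eur": 50000}])
      print(g.serialize(format="turtle"))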

  1. BioSWR – Semantic Web Services Registry for Bioinformatics

    Science.gov (United States)

    Repchevsky, Dmitry; Gelpi, Josep Ll.

    2014-01-01

    Despite the variety of available Web services registries specially aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on Web Service Definition Language (WSDL). Although WSDL is quite flexible to support common Web services types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118

  2. Context-dependent Reasoning for the Semantic Web

    Directory of Open Access Journals (Sweden)

    Neli P. Zlatareva

    2011-08-01

    Full Text Available Ontologies are the backbone of the emerging Semantic Web, which is envisioned to dramatically improve current web services by extending them with intelligent capabilities such as reasoning and context-awareness. They define a shared vocabulary of common domains accessible to both humans and computers, and support various types of information management including storage and processing of data. Current ontology languages, which are designed to be decidable to allow for automatic data processing, target simple typed ontologies that are completely and consistently specified. As the size of ontologies and the complexity of web applications grow, the need for more flexible representation and reasoning schemes emerges. This article presents a logical framework utilizing context-dependent rules which are intended to support ontologies that are not fully and/or precisely specified. A hypothetical application scenario is described to illustrate the type of ontologies targeted, and the type of queries that the presented logical framework is intended to address.

  3. NanoParticle Ontology for Cancer Nanotechnology Research

    Science.gov (United States)

    Thomas, Dennis G.; Pappu, Rohit V.; Baker, Nathan A.

    2010-01-01

    Data generated from cancer nanotechnology research are so diverse and large in volume that it is difficult to share and efficiently use them without informatics tools. In particular, ontologies that provide a unifying knowledge framework for annotating the data are required to facilitate the semantic integration, knowledge-based searching, unambiguous interpretation, mining and inferencing of the data using informatics methods. In this paper, we discuss the design and development of NanoParticle Ontology (NPO), which is developed within the framework of the Basic Formal Ontology (BFO), and implemented in the Ontology Web Language (OWL) using well-defined ontology design principles. The NPO was developed to represent knowledge underlying the preparation, chemical composition, and characterization of nanomaterials involved in cancer research. Public releases of the NPO are available through BioPortal website, maintained by the National Center for Biomedical Ontology. Mechanisms for editorial and governance processes are being developed for the maintenance, review, and growth of the NPO. PMID:20211274

  4. SPARQL Assist language-neutral query composer

    Science.gov (United States)

    2012-01-01

    Background SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. Results We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. Conclusions To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources. PMID:22373327

  5. SPARQL assist language-neutral query composer.

    Science.gov (United States)

    McCarthy, Luke; Vandervalk, Ben; Wilkinson, Mark

    2012-01-25

    SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources.
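
    SPARQL Assist's code is not part of these records; the rdflib snippet below sketches only the label-matching idea they describe, suggesting ontology terms by prefix-matching their multilingual rdfs:label values (the three terms and their labels are invented).

      from rdflib import Graph, RDFS

      g = Graph()
      g.parse(data="""
      @prefix ex:   <http://example.org/onto#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

      ex:Protein  rdfs:label "protein"@en, "protéine"@fr .
      ex:Promoter rdfs:label "promoter"@en, "promoteur"@fr .
      ex:Gene     rdfs:label "gene"@en .
      """, format="turtle")

      def suggest(prefix, lang="en"):
          """Type-ahead: terms whose label in the user's language starts with the prefix."""
          prefix = prefix.lower()
          for term, label in g.subject_objects(RDFS.label):
              if label.language == lang and str(label).lower().startswith(prefix):
                  yield str(label), term

      print(list(suggest("pro", lang="fr")))   # protéine, promoteur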

  6. Expert2OWL: A Methodology for Pattern-Based Ontology Development.

    Science.gov (United States)

    Tahar, Kais; Xu, Jie; Herre, Heinrich

    2017-01-01

    The formalization of expert knowledge enables a broad spectrum of applications employing ontologies as underlying technology. These include eLearning, Semantic Web and expert systems. However, the manual construction of such ontologies is time-consuming and thus expensive. Moreover, experts are often unfamiliar with the syntax and semantics of formal ontology languages such as OWL and usually have no experience in developing formal ontologies. To overcome these barriers, we developed a new method and tool, called Expert2OWL that provides efficient features to support the construction of OWL ontologies using GFO (General Formal Ontology) as a top-level ontology. This method allows a close and effective collaboration between ontologists and domain experts. Essentially, this tool integrates Excel spreadsheets as part of a pattern-based ontology development and refinement process. Expert2OWL enables us to expedite the development process and modularize the resulting ontologies. We applied this method in the field of Chinese Herbal Medicine (CHM) and used Expert2OWL to automatically generate an accurate Chinese Herbology ontology (CHO). The expressivity of CHO was tested and evaluated using ontology query languages SPARQL and DL. CHO shows promising results and can generate answers to important scientific questions such as which Chinese herbal formulas contain which substances, which substances treat which diseases, and which ones are the most frequently used in CHM.

  7. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    Science.gov (United States)

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. © 2009 Elsevier B.V. All rights reserved.

  8. Integrating reasoning and clinical archetypes using OWL ontologies and SWRL rules.

    Science.gov (United States)

    Lezcano, Leonardo; Sicilia, Miguel-Angel; Rodríguez-Solano, Carlos

    2011-04-01

    Semantic interoperability is essential to facilitate the computerized support for alerts, workflow management and evidence-based healthcare across heterogeneous electronic health record (EHR) systems. Clinical archetypes, which are formal definitions of specific clinical concepts defined as specializations of a generic reference (information) model, provide a mechanism to express data structures in a shared and interoperable way. However, currently available archetype languages do not provide direct support for mapping to formal ontologies and then exploiting reasoning on clinical knowledge, which are key ingredients of full semantic interoperability, as stated in the SemanticHEALTH report [1]. This paper reports on an approach to translate definitions expressed in the openEHR Archetype Definition Language (ADL) to a formal representation expressed using the Ontology Web Language (OWL). The formal representations are then integrated with rules expressed with Semantic Web Rule Language (SWRL) expressions, providing an approach to apply the SWRL rules to concrete instances of clinical data. Sharing the knowledge expressed in the form of rules is consistent with the philosophy of open sharing, encouraged by archetypes. Our approach also allows the reuse of formal knowledge, expressed through ontologies, and extends reuse to propositions of declarative knowledge, such as those encoded in clinical guidelines. This paper describes the ADL-to-OWL translation approach, describes the techniques to map archetypes to formal ontologies, and demonstrates how rules can be applied to the resulting representation. We provide examples taken from a patient safety alerting system to illustrate our approach. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Unintended consequences of existential quantifications in biomedical ontologies

    Directory of Open Access Journals (Sweden)

    Boeker Martin

    2011-11-01

    Full Text Available Abstract Background The Open Biomedical Ontologies (OBO) Foundry is a collection of freely available ontologically structured controlled vocabularies in the biomedical domain. Most of them are disseminated via both the OBO Flatfile Format and the semantic web format Web Ontology Language (OWL), which draws upon formal logic. Based on the interpretations underlying OWL description logics (OWL-DL) semantics, we scrutinize the OWL-DL releases of OBO ontologies to assess whether their logical axioms correspond to the meaning intended by their authors. Results We analyzed ontologies and ontology cross products available via the OBO Foundry site http://www.obofoundry.org for existential restrictions (someValuesFrom), from which we examined a random sample of 2,836 clauses. According to a rating done by four experts, 23% of all existential restrictions in OBO Foundry candidate ontologies are suspicious (Cohen's κ = 0.78). We found that a smaller proportion of existential restrictions in OBO Foundry cross products are suspicious, but in this case an accurate quantitative judgment is not possible due to a low inter-rater agreement (κ = 0.07). We identified several typical modeling problems, for which satisfactory ontology design patterns based on OWL-DL were proposed. We further describe several usability issues with OBO ontologies, including the lack of ontological commitment for several common terms, and the proliferation of domain-specific relations. Conclusions The current OWL releases of OBO Foundry (and Foundry candidate) ontologies contain numerous assertions which do not properly describe the underlying biological reality, or are ambiguous and difficult to interpret. The solution is a better anchoring in upper ontologies and a restriction to relatively few, well-defined relation types with given domain and range constraints.

  10. FOCIH: Form-Based Ontology Creation and Information Harvesting

    Science.gov (United States)

    Tao, Cui; Embley, David W.; Liddle, Stephen W.

    Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data—which some see as Web 3.0—is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.

  11. CelOWS: an ontology based framework for the provision of semantic web services related to biological models.

    Science.gov (United States)

    Matos, Ely Edison; Campos, Fernanda; Braga, Regina; Palazzi, Daniele

    2010-02-01

    The amount of information generated by biological research has led to an intensive use of models. Mathematical and computational modeling needs accurate descriptions so that models can be shared, reused and simulated as formulated by their original authors. In this paper, we introduce the Cell Component Ontology (CelO), expressed in OWL-DL. This ontology captures both the structure of a cell model and the properties of functional components. We use this ontology in a Web project (CelOWS) to describe, query and compose CellML models, using semantic web services. It aims to improve the reuse and composition of existing components and to allow semantic validation of new models.

  12. An approach to an ontology for graphical modeling languages

    Directory of Open Access Journals (Sweden)

    Marcela Naranjo

    2009-05-01

    Full Text Available UML, SysML, and WebML are graphical modeling languages (GML). Despite their similarities, these languages cannot be interpreted jointly, since they involve different kinds of models and diagrams. Techniques for studying the features of some GML can be found in the literature, but they are applied to individual languages without considering the features those languages have in common. This paper proposes the design and implementation of an ontology that summarizes the main concepts and relations of GML, following a methodology created at Stanford University. The resulting ontology answers 35 competency questions, some of which are exemplified in the paper.

  13. Towards Agile Ontology Maintenance

    Science.gov (United States)

    Luczak-Rösch, Markus

    Ontologies are an appropriate means to represent knowledge on the Web. Research on ontology engineering has produced practices for integrative lifecycle support. However, ontologies have not yet achieved broad success in Web-based information systems, while more lightweight semantic approaches have been rather successful. We assume that, paired with the emerging trend of services and microservices on the Web, new dynamic scenarios are gaining momentum in which a shared knowledge base is made available to several dynamically changing services with disparate requirements. Our work envisions a step towards such a dynamic scenario, in which an ontology adapts in an agile way to the requirements of the accessing services and applications as well as to users' needs, and reduces the experts' involvement in ontology maintenance processes.

  14. The MMI Device Ontology: Enabling Sensor Integration

    Science.gov (United States)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    The Marine Metadata Interoperability (MMI) project has developed an ontology for devices to describe sensors and sensor networks. This ontology is implemented in the W3C Web Ontology Language (OWL) and provides an extensible conceptual model and controlled vocabularies for describing heterogeneous instrument types, with different data characteristics, and their attributes. It can help users populate metadata records for sensors; associate devices with their platforms, deployments, measurement capabilities and restrictions; aid in discovery of sensor data, both historic and real-time; and improve the interoperability of observational oceanographic data sets. We developed the MMI Device Ontology following a community-based approach. By building on and integrating other models and ontologies from related disciplines, we sought to facilitate semantic interoperability while avoiding duplication. Key concepts and insights from various communities, including the Open Geospatial Consortium (e.g., SensorML and Observations and Measurements specifications), Semantic Web for Earth and Environmental Terminology (SWEET), and W3C Semantic Sensor Network Incubator Group, have significantly enriched the development of the ontology. Individuals ranging from instrument designers, science data producers and consumers to ontology specialists and other technologists contributed to the work. Applications of the MMI Device Ontology are underway for several community use cases. These include vessel-mounted multibeam mapping sonars for the Rolling Deck to Repository (R2R) program and description of diverse instruments on deepwater Ocean Reference Stations for the OceanSITES program. These trials involve creation of records completely describing instruments, either by individual instances or by manufacturer and model. Individual terms in the MMI Device Ontology can be referenced with their corresponding Uniform Resource Identifiers (URIs) in sensor-related metadata specifications (e

  15. Automatic geospatial information Web service composition based on ontology interface matching

    Science.gov (United States)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of WebGIS can be exposed as geospatial information services, which helps to overcome the isolation of information in the geospatial information sharing field. Geospatial information Web service composition, which combines outsourced services working in tandem to offer value-added services, therefore plays a key role in fully exploiting geospatial information services. This paper proposes an automatic geospatial information Web service composition algorithm that employs the lexical ontology WordNet to analyze semantic distances among service interfaces. By matching input and output parameters against the semantic meaning of pairs of service interfaces, a geospatial information Web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented; its results show the feasibility of the algorithm and its promise for the emerging demand for geospatial information Web service composition.
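
    A rough sketch of the kind of WordNet-based interface matching described above (not the authors' code) could look as follows in Python, assuming NLTK with the WordNet corpus downloaded; the Wu-Palmer measure is used here as one plausible choice of semantic similarity.

        # Sketch only: requires `pip install nltk` and nltk.download("wordnet").
        from nltk.corpus import wordnet as wn

        def term_similarity(term_a: str, term_b: str) -> float:
            """Best Wu-Palmer similarity over all noun senses of the two terms."""
            scores = [
                (a.wup_similarity(b) or 0.0)
                for a in wn.synsets(term_a, pos=wn.NOUN)
                for b in wn.synsets(term_b, pos=wn.NOUN)
            ]
            return max(scores, default=0.0)

        # Does service A's output parameter "river" fit service B's input "stream"?
        score = term_similarity("river", "stream")
        print(round(score, 2))  # chain the services only if the score passes a threshold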

  16. An Ontological Approach to Developing Information Operations Applications for use on the Semantic Web

    National Research Council Canada - National Science Library

    Clarke, Timothy L

    2008-01-01

    .... By expressing IO capabilities in a formal ontology suitable for use on the Semantic Web, conditions are set such that computational power can more efficiently be leveraged to better define required...

  18. OPPL-Galaxy, a Galaxy tool for enhancing ontology exploitation as part of bioinformatics workflows

    Science.gov (United States)

    2013-01-01

    Background Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists’ toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than that obtained by a manual treatment. Results We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided in order to demonstrate OPPL-Galaxy’s capability for enriching, modifying and querying biomedical ontologies. Conclusions Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses. PMID:23286517

  19. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  20. Web Approach for Ontology-Based Classification, Integration, and Interdisciplinary Usage of Geoscience Metadata

    Directory of Open Access Journals (Sweden)

    B Ritschel

    2012-10-01

    Full Text Available The Semantic Web is a W3C approach that integrates the different sources of semantics within documents and services using ontology-based techniques. The main objective of this approach in the geoscience domain is the improvement of understanding, integration, and usage of Earth and space science related web content in terms of data, information, and knowledge for machines and people. The modeling and representation of semantic attributes and relations within and among documents can be realized by human readable concept maps and machine readable OWL documents. The objectives for the usage of the Semantic Web approach in the GFZ data center ISDC project are the design of an extended classification of metadata documents for product types related to instruments, platforms, and projects as well as the integration of different types of metadata related to data product providers, users, and data centers. Sources of content and semantics for the description of Earth and space science product types and related classes are standardized metadata documents (e.g., DIF documents, publications, grey literature, and Web pages. Other sources are information provided by users, such as tagging data and social navigation information. The integration of controlled vocabularies as well as folksonomies plays an important role in the design of well formed ontologies.

  1. Building an ontology of pulmonary diseases with natural language processing tools using textual corpora.

    Science.gov (United States)

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2007-01-01

    Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think that automating the coding task requires conceptual modeling of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of this specialty through an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we develop a precise methodological process for the knowledge engineer in order to build various types of medical ontologies. This process is based on the need to express precisely in natural language the meaning of each concept using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is to apply natural language processing tools to corpora to develop the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other being a teaching book. We propose to combine two approaches to enrich the ontology building: (i) a method which consists of building terminological resources through distributional analysis and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts, and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods and the comparison of the resulting terminological structures give a knowledge engineer interesting clues for building an ontology.

  2. Advancing data reuse in phyloinformatics using an ontology-driven Semantic Web approach.

    Science.gov (United States)

    Panahiazar, Maryam; Sheth, Amit P; Ranabahu, Ajith; Vos, Rutger A; Leebens-Mack, Jim

    2013-01-01

    Phylogenetic analyses can resolve historical relationships among genes, organisms or higher taxa. Understanding such relationships can elucidate a wide range of biological phenomena, including, for example, the importance of gene and genome duplications in the evolution of gene function, the role of adaptation as a driver of diversification, or the evolutionary consequences of biogeographic shifts. Phyloinformaticists are developing data standards, databases and communication protocols (e.g. Application Programming Interfaces, APIs) to extend the accessibility of gene trees, species trees, and the metadata necessary to interpret these trees, thus enabling researchers across the life sciences to reuse phylogenetic knowledge. Specifically, Semantic Web technologies are being developed to make phylogenetic knowledge interpretable by web agents, thereby enabling intelligently automated, high-throughput reuse of results generated by phylogenetic research. This manuscript describes an ontology-driven, semantic problem-solving environment for phylogenetic analyses and introduces artefacts that can support phyloinformatic efforts to promote the accessibility of trees and their underlying metadata. PhylOnt is an extensible ontology with concepts describing tree types and tree-building methodologies, including estimation methods, models and programs. In addition, we present the PhylAnt platform for annotating scientific articles and NeXML files with PhylOnt concepts. The novelty of this work is the annotation of NeXML files and phylogenetics-related documents with the PhylOnt ontology. This approach advances data reuse in phyloinformatics.

  3. An Ontology Design Pattern for Surface Water Features

    Energy Technology Data Exchange (ETDEWEB)

    Sinha, Gaurav [Ohio University; Mark, David [University at Buffalo (SUNY); Kolas, Dave [Raytheon BBN Technologies; Varanka, Dalia [U.S. Geological Survey, Rolla, MO; Romero, Boleslo E [University of California, Santa Barbara; Feng, Chen-Chieh [National University of Singapore; Usery, Lynn [U.S. Geological Survey, Rolla, MO; Liebermann, Joshua [Tumbling Walls, LLC; Sorokine, Alexandre [ORNL

    2014-01-01

    Surface water is a primary concept of human experience, but concepts are captured in cultures and languages in many different ways. Still, many commonalities can be found due to the physical basis of many of the properties and categories. An abstract ontology of surface water features based only on those physical properties of landscape features has the best potential for serving as a foundational domain ontology. It can then be used to systematically incorporate concepts that are specific to a culture, language, or scientific domain. The Surface Water ontology design pattern was developed both for domain knowledge distillation and to serve as a conceptual building-block for more complex surface water ontologies. A fundamental distinction is made in this ontology between landscape features that act as containers (e.g., stream channels, basins) and the bodies of water (e.g., rivers, lakes) that occupy those containers. The semantics of concave (container) landforms are specified in a Dry module and the semantics of contained bodies of water in a Wet module. The pattern is implemented in OWL, and Description Logic axioms and a detailed explanation are also provided. The OWL ontology will be an important contribution to Semantic Web vocabulary for annotating surface water feature datasets. A discussion about why there is a need to complement the pattern with other ontologies, especially the previously developed Surface Network pattern, is also provided. Finally, the practical value of the pattern in semantic querying of surface water datasets is illustrated through a few queries and annotated geospatial datasets.
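
    The container/water-body distinction at the heart of the pattern can be sketched with Python and rdflib as follows; the URIs are placeholders rather than the published pattern's identifiers.

        # Minimal sketch only: rdflib assumed; URIs are invented.
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        SW = Namespace("http://example.org/surface-water-demo#")
        g = Graph()
        g.bind("sw", SW)

        for cls in (SW.Container, SW.WaterBody, SW.Channel, SW.River):
            g.add((cls, RDF.type, OWL.Class))
        g.add((SW.Channel, RDFS.subClassOf, SW.Container))  # "Dry" side: landforms
        g.add((SW.River, RDFS.subClassOf, SW.WaterBody))    # "Wet" side: bodies of water

        g.add((SW.occupies, RDF.type, OWL.ObjectProperty))
        g.add((SW.occupies, RDFS.domain, SW.WaterBody))
        g.add((SW.occupies, RDFS.range, SW.Container))

        # Annotated data and a query along the pattern.
        g.add((SW.exampleRiver, RDF.type, SW.River))
        g.add((SW.exampleChannel, RDF.type, SW.Channel))
        g.add((SW.exampleRiver, SW.occupies, SW.exampleChannel))

        q = """
        PREFIX sw: <http://example.org/surface-water-demo#>
        SELECT ?body ?container WHERE { ?body sw:occupies ?container . }
        """
        for body, container in g.query(q):
            print(body, "occupies", container)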

  4. Practical ontologies for information professionals

    CERN Document Server

    AUTHOR|(CDS)2071712

    2016-01-01

    Practical Ontologies for Information Professionals provides an introduction to ontologies and their development, an increasingly important tool in the fight against information overload. The publishing and sharing of explicit explanations for a wide variety of conceptualizations, in a machine-readable format, has the power both to improve information retrieval and to identify new knowledge. This new book provides an accessible introduction to the following: * What is an ontology? Defining the concept and why it is increasingly important to the information professional * Ontologies and the semantic web * Existing ontologies, such as SKOS, OWL, FOAF, schema.org, and the DBpedia Ontology * Adopting and building ontologies, showing how to avoid repetition of work and how to build a simple ontology with Protégé * Interrogating semantic web ontologies * The future of ontologies and the role of the ...

  5. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    Directory of Open Access Journals (Sweden)

    Luis Iribarne

    2014-01-01

    Full Text Available Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent, and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models, and describes the formalization, an experimental testing environment with three scenarios, and a tool that allows our proposal to be evaluated and validated.

  6. Ontology of a scene based on Java 3D architecture.

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2009-12-01

    Full Text Available This article approaches the class hierarchy of a scene built with the Java 3D architecture in order to develop an ontology of a scene from its essential semantic components, aimed at the semantic structuring of Web3D. Java was selected because Xj3D, the language recommended by the W3C for developing Web3D-oriented applications based on the X3D standard, has schemas whose composition is based on the Java 3D architecture. The work first identifies the domain and scope of the ontology, defining the classes and subclasses drawn from the Java 3D architecture and the essential elements of a scene, such as its point of origin, its rotation and translation fields, the bounds of the scene, and the definition of shaders. It then defines the slots, declared in RDF as a framework for describing the properties of the established classes, identifying the domain and range of each class, and composes the OWL ontology in SWOOP. Finally, the ontology is instantiated for an Iconosphere object from the defined class expressions.

  7. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnosis in four steps (feature selection, case retrieval, case matching and case updating); and (3) to cover the shortcomings of the CBR method when relevant cases are lacking, ontology-based RBR (rule-based reasoning) is introduced by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods and to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through the analysis of a case study.
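
    The paper encodes its diagnostic knowledge as SWRL rules; purely as a stand-in illustration, the sketch below expresses an analogous rule as a SPARQL CONSTRUCT over an rdflib graph, with invented URIs for a loader, its symptoms and a suspected fault.

        # Sketch only: rdflib assumed; URIs and the rule itself are invented.
        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF

        LD = Namespace("http://example.org/loader-demo#")
        g = Graph()
        g.bind("ld", LD)

        # Observed symptoms on a concrete loader.
        g.add((LD.loader42, RDF.type, LD.Loader))
        g.add((LD.loader42, LD.hasSymptom, LD.LowHydraulicPressure))
        g.add((LD.loader42, LD.hasSymptom, LD.OilLeak))

        # Rule: low hydraulic pressure together with an oil leak suggests a pump seal fault.
        rule = """
        PREFIX ld: <http://example.org/loader-demo#>
        CONSTRUCT { ?m ld:hasSuspectedFault ld:PumpSealFault . }
        WHERE {
          ?m a ld:Loader ;
             ld:hasSymptom ld:LowHydraulicPressure ;
             ld:hasSymptom ld:OilLeak .
        }
        """
        inferred = g.query(rule).graph  # graph holding the constructed triples
        for s, p, o in inferred:
            print(s, p, o)  # -> loader42 hasSuspectedFault PumpSealFault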

  8. Towards a Consistent and Scientifically Accurate Drug Ontology.

    Science.gov (United States)

    Hogan, William R; Hanna, Josh; Joseph, Eric; Brochhausen, Mathias

    2013-01-01

    Our use case for comparative effectiveness research requires an ontology of drugs that enables querying National Drug Codes (NDCs) by active ingredient, mechanism of action, physiological effect, and therapeutic class of the drug products they represent. We conducted an ontological analysis of drugs from the realist perspective, and evaluated existing drug terminology, ontology, and database artifacts from (1) the technical perspective, (2) the perspective of pharmacology and medical science, (3) the perspective of description logic semantics (if they were available in the Web Ontology Language, OWL), and (4) the perspective of our realism-based analysis of the domain. No existing resource was sufficient. Therefore, we built the Drug Ontology (DrOn) in OWL, which we populated with NDCs and other classes from RxNorm using only content created by the National Library of Medicine. We also built an application that uses DrOn to query for NDCs as outlined above, available at: http://ingarden.uams.edu/ingredients. The application uses an OWL-based description logic reasoner to execute end-user queries. DrOn is available at http://code.google.com/p/dr-on.
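
    The kind of query the application executes might look like the following Python/rdflib sketch; the file name, property and class IRIs are placeholders, not DrOn's actual identifiers.

        # Sketch only: rdflib assumed; "dron-demo.ttl" and all IRIs are hypothetical.
        from rdflib import Graph

        g = Graph()
        g.parse("dron-demo.ttl", format="turtle")

        q = """
        PREFIX ex: <http://example.org/dron-demo#>
        SELECT ?ndc WHERE {
          ?product ex:hasNationalDrugCode ?ndc ;
                   ex:hasActiveIngredient ex:Metformin .
        }
        """
        for row in g.query(q):
            print(row[0])  # each matching National Drug Code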

  9. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction are performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mapping validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and the validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mappings as ontologies evolve over time and highlights the importance of semi-automatic validation.

  10. Semantic framework for mapping object-oriented model to semantic web languages.

    Science.gov (United States)

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages, on the other. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment, since reflective Java annotations are used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers using a graphical user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.

  11. Toxicology ontology perspectives.

    Science.gov (United States)

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

  12. An Ontology Evolution and Maintenance Model in Web Environment (一种网络环境中的本体演化和维护模型)

    Institute of Scientific and Technical Information of China (English)

    王进; 陈恩红; 林乐

    2003-01-01

    With the rapid development of the Internet, the knowledge Web will be one of the development directions of the next generation of the Internet, and the application of ontologies will bring it new opportunities and challenges. In this paper, we propose an ontology evolution and maintenance model for the Web environment that resolves the conflicts and inconsistencies between domain ontologies and knowledge caused by changes in Web knowledge sources, by extracting and analyzing the metadata of the data sources.

  13. Ontologies and the Semantic Web 2005: summary of the workshop (Ontologías y Web Semántica 2005)

    OpenAIRE

    Corcho, Oscar; Gómez-Pérez, A.; Fernández López, Mariano; Ramos Gargantilla, JA.

    2006-01-01

    This article summarizes the results obtained in the workshop "Ontologías y Web Semántica 2005", held on 14 November 2005 in Santiago de Compostela, in the context of the Conference of the Spanish Association for Artificial Intelligence (CAEPIA2005).

  14. C2 Domain Ontology within Our Lifetime

    Science.gov (United States)

    2009-06-01

    [25] Masolo, C., et al.: The WonderWeb Library of Foundational Ontologies Preliminary Report, WonderWeb Deliverable D17, ISTC-CNR, May 2003. [26] www.ifomis.org/bfo/BFO

  15. Ontology-based multi-agent systems

    Energy Technology Data Exchange (ETDEWEB)

    Hadzic, Maja; Wongthongtham, Pornpit; Dillon, Tharam; Chang, Elizabeth [Digital Ecosystems and Business Intelligence Institute, Perth, WA (Australia)

    2009-07-01

    The Semantic web has given a great deal of impetus to the development of ontologies and multi-agent systems. Several books have appeared which discuss the development of ontologies or of multi-agent systems separately on their own. The growing interaction between agents and ontologies has highlighted the need for integrated development of these. This book is unique in being the first to provide an integrated treatment of the modeling, design and implementation of such combined ontology/multi-agent systems. It provides clear exposition of this integrated modeling and design methodology. It further illustrates this with two detailed case studies in (a) the biomedical area and (b) the software engineering area. The book is, therefore, of interest to researchers, graduate students and practitioners in the semantic web and web science area. (orig.)

  16. WebQuests as Language-Learning Tools

    Science.gov (United States)

    Aydin, Selami

    2016-01-01

    This study presents a review of the literature that examines WebQuests as tools for second-language acquisition and foreign language-learning processes to guide teachers in their teaching activities and researchers in further research on the issue. The study first introduces the theoretical background behind WebQuest use in the mentioned…

  17. Developing VISO: Vaccine Information Statement Ontology for patient education.

    Science.gov (United States)

    Amith, Muhammad; Gong, Yang; Cunningham, Rachel; Boom, Julie; Tao, Cui

    2015-01-01

    The objective was to construct a comprehensive vaccine information ontology that can support personal health information applications using a patient-consumer lexicon, and lead to outcomes that can improve patient education. The authors composed the Vaccine Information Statement Ontology (VISO) using the Web Ontology Language (OWL). We started with 6 Vaccine Information Statement (VIS) documents collected from the Centers for Disease Control and Prevention (CDC) website. Important and relevant selections from the documents were recorded, and knowledge triples were derived. Based on the collection of knowledge triples, the meta-level formalization of the vaccine information domain was developed. Relevant instances and their relationships were created to represent vaccine domain knowledge. The initial iteration of VISO was realized, based on the 6 Vaccine Information Statements, and coded in OWL 2 with Protégé. The ontology consisted of 132 concepts (classes and subclasses) with 33 types of relationships between the concepts. The instances of these classes totaled 460, with 429 knowledge triples in total. Semiotic-based metric scoring was applied to evaluate the quality of the ontology.
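
    As an illustration of what such knowledge triples look like (invented URIs, not VISO's actual terms), the following Python/rdflib sketch records the statement "the MMR vaccine prevents measles".

        # Sketch only: rdflib assumed; URIs are placeholders.
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF

        VISO = Namespace("http://example.org/viso-demo#")
        g = Graph()
        g.bind("viso", VISO)

        g.add((VISO.Vaccine, RDF.type, OWL.Class))
        g.add((VISO.Disease, RDF.type, OWL.Class))
        g.add((VISO.prevents, RDF.type, OWL.ObjectProperty))

        g.add((VISO.MMRVaccine, RDF.type, VISO.Vaccine))
        g.add((VISO.Measles, RDF.type, VISO.Disease))
        g.add((VISO.MMRVaccine, VISO.prevents, VISO.Measles))  # one knowledge triple

        print(g.serialize(format="turtle"))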

  18. Communication and Gamification in the Web-Based Foreign Language Educational System: Web- Based Foreign Language Educational System

    Science.gov (United States)

    Osipov, Ilya V.; Volinsky, Alex A.; Nikulchev, Evgeny; Prasikova, Anna Y.

    2016-01-01

    The paper describes development of the educational online web communication platform for teaching and learning foreign languages. The main objective was to develop a web application for teaching foreigners to understand casual fluent speech. The system is based on the time bank principle, allowing users to teach others their native language along…

  19. An empirical study on how the distribution of ontologies affects reasoning on the web

    NARCIS (Netherlands)

    Bazoobandi, Hamid R.; Urbani, Jacopo; van Harmelen, Frank; Bal, Henri

    2017-01-01

    The Web of Data is an inherently distributed environment where ontologies are located in (physically) remote locations and are subject to constant changes. Reasoning is affected by these changes, but the extent and significance of this dependency is not well-studied yet. To address this problem,

  20. BiNChE: a web tool and library for chemical enrichment analysis based on the ChEBI ontology.

    Science.gov (United States)

    Moreno, Pablo; Beisken, Stephan; Harsha, Bhavana; Muthukrishnan, Venkatesh; Tudose, Ilinca; Dekker, Adriano; Dornfeldt, Stefanie; Taruttis, Franziska; Grosse, Ivo; Hastings, Janna; Neumann, Steffen; Steinbeck, Christoph

    2015-02-21

    Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, there are only a few that can be used for small-molecule enrichment analysis. We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI Ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology. BiNChE aids in the exploration of large sets of small molecules produced within Metabolomics or other Systems Biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology and is additionally available as a standalone library.
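
    The over-representation test behind this kind of enrichment analysis is commonly a hypergeometric test; the following Python sketch (invented counts, not BiNChE's own code) shows the calculation with SciPy.

        # Sketch only: scipy assumed; all counts are made up for illustration.
        from scipy.stats import hypergeom

        population = 5000  # annotated small molecules in the background set
        annotated = 120    # background molecules annotated to one ChEBI class
        sample = 50        # molecules in the user's input set
        hits = 9           # input molecules annotated to that class

        # P(X >= hits) for X ~ Hypergeometric(population, annotated, sample)
        p_value = hypergeom.sf(hits - 1, population, annotated, sample)
        print(p_value)  # small values indicate over-representation of the class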

  1. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  2. Surreptitious, Evolving and Participative Ontology Development: An End-User Oriented Ontology Development Methodology

    Science.gov (United States)

    Bachore, Zelalem

    2012-01-01

    Ontology not only is considered to be the backbone of the semantic web but also plays a significant role in distributed and heterogeneous information systems. However, ontology still faces limited application and adoption to date. One of the major problems is that prevailing engineering-oriented methodologies for building ontologies do not…

  3. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    Directory of Open Access Journals (Sweden)

    Asma Adala

    2011-01-01

    Full Text Available As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.

  4. Development of an Web Service Architecture for Enterprise Application Integration

    International Nuclear Information System (INIS)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae; Kim, Kyu-Ho; Lee, Dong-Chul

    2007-01-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems can be, for example, an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches use ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL, etc. as the system integration platform. The approach still has the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they do not have the access information they can never obtain the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantics-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and obtain services from service providers through the Web Service Description Language (WSDL).

  5. Using Ontologies in Cybersecurity Field

    Directory of Open Access Journals (Sweden)

    Tiberiu Marian GEORGESCU

    2017-01-01

    Full Text Available This paper presents exploratory research that aims to improve the cybersecurity field by means of Semantic Web technologies. The authors present a framework which uses Semantic Web technologies to automatically extract and analyse natural language text available online. The system provides results that are further analysed by cybersecurity experts to detect black hat hackers’ activities. The authors examine several characteristics of how hacking communities communicate and collaborate online and how much information can be obtained by analysing different types of internet text communication channels. Taking online sources as input data, the proposed model extracts and analyses natural language related to the cybersecurity field, with the aid of ontologies. The main objective is to generate information about possible black hat hacking actions, which can later be analysed in detail by experts. This paper describes the data flow of the framework and proposes technological solutions so that the model can be applied. In future work, the authors plan to implement the described framework as a software application.

  6. Exploratory visualization of earth science data in a Semantic Web context

    Science.gov (United States)

    Ma, X.; Fox, P. A.

    2012-12-01

    Earth science data are increasingly unlocked from their local 'safes' and shared online with the global science community as well as the average citizen. The European Union (EU)-funded project OneGeology-Europe (1G-E, www.onegeology-europe.eu) is a typical project that promotes works in that direction. The 1G-E web portal provides easy access to distributed geological data resources across participating EU member states. Similar projects can also be found in other countries or regions, such as the geoscience information network USGIN (www.usgin.org) in United States, the groundwater information network GIN-RIES (www.gw-info.net) in Canada and the earth science infrastructure AuScope (www.auscope.org.au) in Australia. While data are increasingly made available online, we currently face a shortage of tools and services that support information and knowledge discovery with such data. One reason is that earth science data are recorded in professional language and terms, and people without background knowledge cannot understand their meanings well. The Semantic Web provides a new context to help computers as well as users to better understand meanings of data and conduct applications. In this study we aim to chain up Semantic Web technologies (e.g., vocabularies/ontologies and reasoning), data visualization (e.g., an animation underpinned by an ontology) and online earth science data (e.g., available as Web Map Service) to develop functions for information and knowledge discovery. We carried out a case study with data of the 1G-E project. We set up an ontology of geological time scale using the encoding languages of SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) from W3C (World Wide Web Consortium, www.w3.org). Then we developed a Flash animation of geological time scale by using the ActionScript language. The animation is underpinned by the ontology and the interrelationships between concepts of geological time scale are visualized in the
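
    A minimal sketch of how part of a geological time scale can be encoded with SKOS, assuming Python with rdflib and using invented URIs rather than the project's actual vocabulary, is shown below.

        # Sketch only: rdflib assumed; URIs are placeholders.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        GT = Namespace("http://example.org/geotime-demo#")
        g = Graph()
        g.bind("gt", GT)
        g.bind("skos", SKOS)

        for concept, label in [(GT.Mesozoic, "Mesozoic"), (GT.Jurassic, "Jurassic")]:
            g.add((concept, RDF.type, SKOS.Concept))
            g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

        # The Jurassic period is narrower than the Mesozoic era.
        g.add((GT.Jurassic, SKOS.broader, GT.Mesozoic))
        g.add((GT.Mesozoic, SKOS.narrower, GT.Jurassic))

        print(g.serialize(format="turtle"))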

  7. Auf dem Weg zum Web 3.0: Taxonomien und Ontologien für die medizinische Ausbildung - eine systematische Literaturrecherche [Towards Web 3.0: Taxonomies and ontologies for medical education - a systematic review]

    Directory of Open Access Journals (Sweden)

    Blaum, Wolf E.

    2013-02-01

    Full Text Available [english] Introduction: Both for curricular development and mapping, as well as for orientation within the mounting supply of learning resources in medical education, the Semantic Web ("Web 3.0") poses a low-threshold, effective tool that enables identification of content-related items across system boundaries. Replacing the currently required manual linking with automatically generated links based on content and semantics requires the use of a suitably structured vocabulary for a machine-readable description of object content. The aim of this study is to compile the existing taxonomies and ontologies used for the annotation of medical content and learning resources, to compare them using selected criteria, and to verify their suitability in the context described above. Methods: Based on a systematic literature search, existing taxonomies and ontologies for the description of medical learning resources were identified. Through web searches and/or direct contact with the respective editors, each of the structured vocabularies thus identified was examined with regard to topic, structure, language, scope, maintenance, and technology of the taxonomy/ontology. In addition, suitability for use in the Semantic Web was verified. Results: Among 20 identified publications, 14 structured vocabularies were identified, which differed rather strongly with regard to language, scope, currency, and maintenance. None of the identified vocabularies fulfilled the necessary criteria for content description of medical curricula and learning resources in the German-speaking world. Discussion: On the way towards Web 3.0, a significant problem lies in the selection and use of an appropriate German vocabulary for the machine-readable description of object content. Possible solutions include the development, translation and/or combination of existing vocabularies, possibly including partial translations of English vocabularies.

  8. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    Science.gov (United States)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need to use sophisticated provenance representations of the web service request as we can no longer just link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request as a single conceptual dataset and also as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and for those identifiers to resolve to whatever level of detail implementers think appropriate in order for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things and in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems
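
    The following Python/rdflib sketch, using the W3C PROV-O vocabulary with invented instance URIs, illustrates the general idea of giving a single web service request its own identifier and describing it as a provenance activity; it is not the ontology design pattern proposed in the presentation.

        # Sketch only: rdflib assumed; instance URIs and the request URL are invented.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import PROV, RDF, XSD

        EX = Namespace("http://example.org/prov-demo#")
        g = Graph()
        g.bind("prov", PROV)
        g.bind("ex", EX)

        request = EX.wfs_request_001  # persistent identifier minted for this request
        g.add((request, RDF.type, PROV.Activity))
        g.add((request, PROV.used, URIRef("http://example.org/wfs?typeName=boreholes")))
        g.add((request, PROV.startedAtTime,
               Literal("2016-04-01T10:15:00Z", datatype=XSD.dateTime)))

        result = EX.dataset_001  # the dataset the request produced
        g.add((result, RDF.type, PROV.Entity))
        g.add((result, PROV.wasGeneratedBy, request))

        print(g.serialize(format="turtle"))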

  9. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    OpenAIRE

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Abstract Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to r...

  10. Coupling ontology driven semantic representation with multilingual natural language generation for tuning international terminologies.

    Science.gov (United States)

    Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    The importance of clinical communication between providers, consumers and others, as well as the requisite for computer interoperability, strengthens the need for sharing common accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are further compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools, based on ontology driven semantic representation, facilitate the discovery of ambiguous and inconsistent terms, and, as such, should be promoted for establishing coherent international terminologies.

  11. Two obvious intuitions : Ontology-mapping needs background knowledge and approximation

    NARCIS (Netherlands)

    Van Harmelen, Frank

    2007-01-01

    Ontology mapping (or ontology alignment, or integration) is one of the most active areas of Semantic Web research. An increasing number of ontologies have become available in recent years, and if the Semantic Web is to be taken seriously, the problem of ontology mapping must be solved. Numerous

  12. A Chatbot as a Natural Web Interface to Arabic Web QA

    Directory of Open Access Journals (Sweden)

    Bayan Abu Shawar

    2011-03-01

    Full Text Available In this paper, we describe a way to access an Arabic Web Question Answering (QA) corpus using a chatbot, without the need for sophisticated natural language processing or logical inference. Any Natural Language (NL) interface to a Question Answering (QA) system is constrained to reply with the given answers, so there is no need for NL generation to recreate well-formed answers, or for deep analysis or logical inference to map user input questions onto a logical ontology; a simple (but large) set of pattern-template matching rules will suffice. In previous research, this approach worked properly with English and other European languages. In this paper, we examine how the same chatbot behaves with an Arabic Web QA corpus. Initial results show that 93% of answers were correct, but because of many characteristics of the Arabic language, rephrasing Arabic questions may lead to no answers.

  13. F-OWL: An Inference Engine for Semantic Web

    Science.gov (United States)

    Zou, Youyong; Finin, Tim; Chen, Harry

    2004-01-01

    Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.
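
    F-OWL itself is built on XSB and Flora-2; purely as an illustration of what computing ontology entailment means, the sketch below materializes OWL RL entailments over a toy rdflib graph using the Python owlrl package (an assumption, unrelated to F-OWL).

        # Sketch only: rdflib and owlrl assumed; URIs are invented.
        import owlrl
        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/entail-demo#")
        g = Graph()
        g.add((EX.Dog, RDFS.subClassOf, EX.Animal))
        g.add((EX.rex, RDF.type, EX.Dog))

        # Materialize entailed triples, e.g. "rex rdf:type Animal".
        owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
        print((EX.rex, RDF.type, EX.Animal) in g)  # True after the closure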

  14. Constructive Ontology Engineering

    Science.gov (United States)

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  15. Development and Evaluation of an Ontology for Guiding Appropriate Antibiotic Prescribing

    Science.gov (United States)

    Furuya, E. Yoko; Kuperman, Gilad J.; Cimino, James J.; Bakken, Suzanne

    2011-01-01

    Objectives To develop and apply formal ontology creation methods to the domain of antimicrobial prescribing and to formally evaluate the resulting ontology through intrinsic and extrinsic evaluation studies. Methods We extended existing ontology development methods to create the ontology and implemented the ontology using Protégé-OWL. Correctness of the ontology was assessed using a set of ontology design principles and domain expert review via the laddering technique. We created three artifacts to support the extrinsic evaluation (set of prescribing rules, alerts and an ontology-driven alert module, and a patient database) and evaluated the usefulness of the ontology for performing knowledge management tasks to maintain the ontology and for generating alerts to guide antibiotic prescribing. Results The ontology includes 199 classes, 10 properties, and 1,636 description logic restrictions. Twenty-three Semantic Web Rule Language rules were written to generate three prescribing alerts: 1) antibiotic-microorganism mismatch alert; 2) medication-allergy alert; and 3) non-recommended empiric antibiotic therapy alert. The evaluation studies confirmed the correctness of the ontology, usefulness of the ontology for representing and maintaining antimicrobial treatment knowledge rules, and usefulness of the ontology for generating alerts to provide feedback to clinicians during antibiotic prescribing. Conclusions This study contributes to the understanding of ontology development and evaluation methods and addresses one knowledge gap related to using ontologies as a clinical decision support system component—a need for formal ontology evaluation methods to measure their quality from the perspective of their intrinsic characteristics and their usefulness for specific tasks. PMID:22019377
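
    The published rules are not reproduced in the abstract; as a hedged sketch of how a medication-allergy alert of the kind described could be written as a Semantic Web Rule Language rule, here is a toy example using the Owlready2 Python library, with hypothetical class and property names rather than those of the actual ontology:

```python
from owlready2 import get_ontology, Thing, Imp

# Hypothetical mini-ontology; the published antibiotic ontology uses its own
# class and property names, which are not reproduced here.
onto = get_ontology("http://example.org/antibiotic-alerts.owl")

with onto:
    class Patient(Thing): pass
    class Drug(Thing): pass
    class AllergyAlertPatient(Patient): pass     # membership inferred by the rule

    class prescribed(Patient >> Drug): pass      # object properties
    class allergic_to(Patient >> Drug): pass

    # SWRL rule: a patient prescribed a drug they are allergic to triggers an alert.
    rule = Imp()
    rule.set_as_rule(
        "Patient(?p), prescribed(?p, ?d), allergic_to(?p, ?d) -> AllergyAlertPatient(?p)"
    )

# Running a reasoner, e.g. owlready2.sync_reasoner_pellet(), would then classify
# every matching individual as an AllergyAlertPatient.
```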

  16. Development of a Web Service Architecture for Enterprise Application Integration

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae [Korea Power Engineering Company, Daejeon (Korea, Republic of); Kim, Kyu-Ho; Lee, Dong-Chul [Korea Electric Power Data Network, Daejeon (Korea, Republic of)

    2007-07-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems, for example, can be an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches are combined with ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL, etc. as a system integration platform. The approach still has the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they do not have the access information they can never obtain the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantics-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and get services from service providers through the Web Service Description Language (WSDL)

  17. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    Science.gov (United States)

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  18. Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies.

    Science.gov (United States)

    Lamy, Jean-Baptiste

    2017-07-01

    Ontologies are widely used in the biomedical domain. While many tools exist for the editing, alignment or evaluation of ontologies, few solutions have been proposed for ontology programming interfaces, i.e. for accessing and modifying an ontology within a programming language. Existing query languages (such as SPARQL) and APIs (such as OWLAPI) are not as easy to use as object programming languages are. Moreover, they provide few solutions to difficulties encountered with biomedical ontologies. Our objective was to design a tool for easily accessing the entities of an OWL ontology, with high-level constructs helping with biomedical ontologies. From our experience on medical ontologies, we identified two difficulties: (1) many entities are represented by classes (rather than individuals), but the existing tools do not permit manipulating classes as easily as individuals, (2) ontologies rely on the open-world assumption, whereas medical reasoning must consider only evidence-based medical knowledge as true. We designed a Python module for ontology-oriented programming. It allows access to the entities of an OWL ontology as if they were objects in the programming language. We propose a simple high-level syntax for managing classes and the associated "role-filler" constraints. We also propose an algorithm for performing local closed world reasoning in simple situations. We developed Owlready, a Python module for high-level access to OWL ontologies. The paper describes the architecture and the syntax of the module version 2. It details how we integrated the OWL ontology model with the Python object model. The paper provides examples based on Gene Ontology (GO). We also demonstrate the interest of Owlready in a use case focused on the automatic comparison of the contraindications of several drugs. This use case illustrates the use of the specific syntax proposed for manipulating classes and for performing local closed world reasoning. Owlready has been successfully
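
    A minimal sketch of the kind of ontology-oriented access Owlready provides, loading the Gene Ontology as in the paper's examples; the calls follow the Owlready2 documentation rather than the paper itself, and a network download of go.owl is assumed:

```python
from owlready2 import get_ontology

# Load the Gene Ontology directly from its OBO Library PURL (a large download).
go = get_ontology("http://purl.obolibrary.org/obo/go.owl").load()

# GO classes live in the OBO namespace and are accessed like Python objects.
obo = go.get_namespace("http://purl.obolibrary.org/obo/")
cellular_component = obo.GO_0005575

print(cellular_component)                          # obo.GO_0005575
print(cellular_component.label)                    # ['cellular_component']
print(list(cellular_component.subclasses())[:5])   # a few direct subclasses
```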

  19. Verification of product design using regulation knowledge base and Web services

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ik June [KAERI, Daejeon (Korea, Republic of); Lee, Jae Chul; Mun, Du Hwan [Kyungpook National University, Daegu (Korea, Republic of); Kim, Byung Chul [Dong-A University, Busan (Korea, Republic of); Hwang, Jin Sang [PartDB Co., Ltd., Daejeon (Korea, Republic of); Lim, Chae Ho [Korea Institute of Industrial Technology, Incheon (Korea, Republic of)

    2015-11-15

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of product design against the regulations related to a product is necessary. To this end, this study presents a new method for the verification of product design using a regulation knowledge base and Web services. The regulation knowledge base, consisting of product ontology and rules, was built with a hybrid technique combining ontology and programming languages. A Web service for design verification was developed to ensure flexible extension of the knowledge base. By virtue of these two technical features, design verification is provided for various products while changes to the system architecture are minimized.

  20. Verification of product design using regulation knowledge base and Web services

    International Nuclear Information System (INIS)

    Kim, Ik June; Lee, Jae Chul; Mun, Du Hwan; Kim, Byung Chul; Hwang, Jin Sang; Lim, Chae Ho

    2015-01-01

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of product design against the regulations related to a product is necessary. To this end, this study presents a new method for the verification of product design using a regulation knowledge base and Web services. The regulation knowledge base, consisting of product ontology and rules, was built with a hybrid technique combining ontology and programming languages. A Web service for design verification was developed to ensure flexible extension of the knowledge base. By virtue of these two technical features, design verification is provided for various products while changes to the system architecture are minimized.

  1. An ontology design pattern for surface water features

    Science.gov (United States)

    Sinha, Gaurav; Mark, David; Kolas, Dave; Varanka, Dalia; Romero, Boleslo E.; Feng, Chen-Chieh; Usery, E. Lynn; Liebermann, Joshua; Sorokine, Alexandre

    2014-01-01

    Surface water is a primary concept of human experience, but its concepts are captured in cultures and languages in many different ways. Still, many commonalities exist due to the physical basis of many of the properties and categories. An abstract ontology of surface water features based only on the physical properties of landscape features has the best potential for serving as a foundational domain ontology for other, more context-dependent ontologies. The Surface Water ontology design pattern was developed both for domain knowledge distillation and to serve as a conceptual building block for more complex or specialized surface water ontologies. A fundamental distinction is made in this ontology between landscape features that act as containers (e.g., stream channels, basins) and the bodies of water (e.g., rivers, lakes) that occupy those containers. The semantics of concave (container) landforms are specified in a Dry module and the semantics of contained bodies of water in a Wet module. The pattern is implemented in OWL, but Description Logic axioms and a detailed explanation are provided in this paper. The OWL ontology will be an important contribution to the Semantic Web vocabulary for annotating surface water feature datasets. Also provided is a discussion of why there is a need to complement the pattern with other ontologies, especially the previously developed Surface Network pattern. Finally, the practical value of the pattern in semantic querying of surface water datasets is illustrated through an annotated geospatial dataset and sample queries using the classes of the Surface Water pattern.

  2. Semantic web data warehousing for caGrid.

    Science.gov (United States)

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.

  3. Geographic Ontologies, Gazetteers and Multilingualism

    Directory of Open Access Journals (Sweden)

    Robert Laurini

    2015-01-01

    Full Text Available Different languages imply different visions of space, so that terminologies differ across geographic ontologies. In addition to their geometric shapes, geographic features have names, sometimes different in different languages. The role of gazetteers, as dictionaries of place names (toponyms), is to maintain relations between place names and locations. The scope of geographic information retrieval is to search for geographic information not against a database, but against the whole Internet: but the Internet stores information in different languages, and it is of paramount importance not to remain restricted to a single language. In this paper, our first step is to clarify the links between geographic objects as computer representations of geographic features, ontologies and gazetteers designed in various languages. Then, we propose some inference rules for matching not only types, but also relations in geographic ontologies with the assistance of gazetteers.

  4. Autonomous Mission Operations for Sensor Webs

    Science.gov (United States)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. The current ontology is compatible with Open Geospatial Consortium (OGC

  5. Web service composition languages: old wine in new bottles?

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Dumas, M.; Hofstede, ter A.H.M.; Chroust, G.; Hofer, C.

    2003-01-01

    Recently, several languages for Web service composition have emerged (e.g., BPEL4WS and WSCI). The goal of these languages is to glue Web services together in a process-oriented way. For this purpose, these languages typically borrow concepts from workflow management systems and embed these concepts

  6. Language in Web Communication

    DEFF Research Database (Denmark)

    Toft, Birthe

    2012-01-01

    Having taught and carried out research in LSP and business communication for many years, I have come across, again and again, the problems arising from the inferior status of language in the business environment. Being convinced that it does not have to be so, instead of going on trying to convince...... non-linguistically trained colleagues of the importance of language via the usual arguments, I suggest that we let them experience the problems arising from the non-recognition of the importance of language via a Web communication crash course, inspired by a course taught to BA students...

  7. Developing an Ontology-Based Rollover Monitoring and Decision Support System for Engineering Vehicles

    Directory of Open Access Journals (Sweden)

    Feixiang Xu

    2018-05-01

    Full Text Available The increasing number of rollover accidents of engineering vehicles has attracted close attention; however, most researchers focus on the analysis and monitoring of rollover stability indexes and seldom on the assessment of, and decision support for, the rollover risk of engineering vehicles. In this context, an ontology-based rollover monitoring and decision support system for engineering vehicles is proposed. The ontology model is built for representing monitored rollover stability data with semantic properties and for constructing semantic relevance among the various concepts involved in the rollover domain. On this basis, ontology querying and reasoning methods based on the SPARQL Protocol and RDF Query Language (SPARQL) and Semantic Web Rule Language (SWRL) rules are utilized to realize the rollover risk assessment and to obtain suggested measures. PC and mobile applications (apps) have also been developed to implement the above methods. In addition, five sets of rollover stability data for an articulated off-road engineering vehicle under different working conditions were analyzed to verify the accuracy and effectiveness of the proposed system.
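
    As a hedged illustration of the SPARQL-based querying step (the paper's actual ontology terms are not reproduced here; the vocabulary and the threshold below are invented), a small rdflib sketch that flags vehicles whose stability index exceeds a limit:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical vocabulary and data; the paper's ontology terms are not reproduced here.
VEH = Namespace("http://example.org/rollover#")

g = Graph()
g.add((VEH.vehicle1, RDF.type, VEH.ArticulatedVehicle))
g.add((VEH.vehicle1, VEH.lateralLoadTransferRatio, Literal(0.87, datatype=XSD.double)))
g.add((VEH.vehicle2, RDF.type, VEH.ArticulatedVehicle))
g.add((VEH.vehicle2, VEH.lateralLoadTransferRatio, Literal(0.42, datatype=XSD.double)))

# SPARQL query flagging vehicles whose stability index exceeds a chosen threshold.
query = """
PREFIX veh: <http://example.org/rollover#>
SELECT ?vehicle ?ratio WHERE {
    ?vehicle a veh:ArticulatedVehicle ;
             veh:lateralLoadTransferRatio ?ratio .
    FILTER (?ratio > 0.8)
}
"""
for vehicle, ratio in g.query(query):
    print(f"Rollover risk: {vehicle} (load transfer ratio {ratio})")
```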

  8. ER2OWL: Generating OWL Ontology from ER Diagram

    Science.gov (United States)

    Fahad, Muhammad

    Ontology is a fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web, reusing previous systems and artifacts. Most legacy systems have been documented in structured analysis and structured design (SASD), especially as simple or Extended ER Diagrams (ERD). Such systems need upgrading to become part of the semantic web. In this paper, we present ERD to OWL-DL ontology transformation rules at the concrete level. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontologies as a semantic web foundation. It helps software engineers in upgrading the structured analysis and design artifact, the ERD, into components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies through the reuse of existing entity relationship models.
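
    The paper's concrete rule set is not reproduced here; the sketch below only illustrates the general idea (an ER entity becomes an owl:Class and each of its attributes becomes a datatype property) using rdflib and an invented toy entity:

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL, XSD

EX = Namespace("http://example.org/er2owl#")

# Toy ER fragment: entity "Student" with attributes name (string) and gpa (decimal).
er_model = {"Student": {"name": XSD.string, "gpa": XSD.decimal}}

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

for entity, attributes in er_model.items():
    cls = EX[entity]
    g.add((cls, RDF.type, OWL.Class))                  # ER entity    -> owl:Class
    for attr, datatype in attributes.items():
        prop = EX["has_" + attr]
        g.add((prop, RDF.type, OWL.DatatypeProperty))  # ER attribute -> datatype property
        g.add((prop, RDFS.domain, cls))
        g.add((prop, RDFS.range, datatype))

print(g.serialize(format="turtle"))
```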

  9. Practical experiences for the development of educational systems in the semantic web

    Directory of Open Access Journals (Sweden)

    Mª del Mar Sánchez Vera

    2013-01-01

    Full Text Available Semantic Web technologies have been applied in educational settings for different purposes in recent years, with the type of application being mainly defined by the way in which knowledge is represented and exploited. The basic technology for knowledge representation in Semantic Web settings is the ontology, which represents a common, shareable and reusable view of a particular application domain. Ontologies can support different activities in educational settings such as organizing course contents, classifying learning objects or assessing learning levels. Consequently, ontologies can become a very useful tool from a pedagogical perspective. This paper focuses on two different experiences where Semantic Web technologies are used in educational settings, the difference between them lying in how knowledge is obtained and represented. On the one hand, the OeLE platform uses ontologies as a support for assessment processes. Such ontologies have to be designed and implemented in semantic languages apt to be used by OeLE. On the other hand, the ENSEMBLE project pursues the development of semantic web applications by creating specific knowledge representations drawn from user needs. Our paper consequently offers an in-depth analysis of the role played by ontologies, showing how they can be used in different ways, drawing a comparison between model patterns, and examining the ways in which they can complement each other, as well as their practical implications.

  10. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Full Text Available Recently there has been a growing interest in the investigation and development of the next generation web - the Semantic Web. While most of the current forms of web content are designed to be presented to humans but are barely understandable by computers, the content of the Semantic Web is structured in a semantic way so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  11. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The number of ontologies and the amount of metadata available on the Web are constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.
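
    As a hedged sketch of the decomposition idea (not the authors' actual kernels), an individual-to-individual kernel can be assembled from specialized set-overlap kernels over class memberships and property assertions, with tunable weights:

```python
# Toy instance data: for each individual, the set of ontology classes it belongs to
# and the set of (property, value) assertions about it. All names are invented.
individuals = {
    "alice": {"classes": {"Person", "Researcher"},
              "props": {("worksAt", "UniA"), ("knows", "bob")}},
    "bob":   {"classes": {"Person", "Student"},
              "props": {("worksAt", "UniA")}},
}

def class_kernel(a, b):
    """Overlap of asserted class memberships (a set-intersection kernel)."""
    return len(individuals[a]["classes"] & individuals[b]["classes"])

def property_kernel(a, b):
    """Overlap of asserted (property, value) pairs."""
    return len(individuals[a]["props"] & individuals[b]["props"])

def combined_kernel(a, b, w_class=1.0, w_prop=0.5):
    """Weighted sum of the specialized kernels; a weighted sum of kernels is a kernel."""
    return w_class * class_kernel(a, b) + w_prop * property_kernel(a, b)

print(combined_kernel("alice", "bob"))  # 1.0 * 1 + 0.5 * 1 = 1.5
```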

  12. Semantic web technologies for enterprise 2.0

    CERN Document Server

    Passant, A

    2010-01-01

    In this book, we detail different theories, methods and implementations combining Web 2.0 paradigms and Semantic Web technologies in Enterprise environments. After introducing those terms, we present the current shortcomings of tools such as blogs and wikis as well as tagging practices in an Enterprise 2.0 context. We define the SemSLATES methodology and the global vision of a middleware architecture based on Semantic Web technologies and Linked Data principles (languages, models, tools and protocols) to solve these issues. Then, we detail the various ontologies that we build to achieve this g

  13. Aber-OWL: a framework for ontology-based data access in biology

    KAUST Repository

    Hoehndorf, Robert; Slater, Luke; Schofield, Paul N; Gkoutos, Georgios V

    2015-01-01

    these ontologies relies on the use of automated reasoning. Results: We have developed the Aber-OWL infrastructure that provides reasoning services for bio-ontologies. Aber-OWL consists of an ontology repository, a set of web services and web interfaces that enable

  14. Reconciliation of ontology and terminology to cope with linguistics.

    Science.gov (United States)

    Baud, Robert H; Ceusters, Werner; Ruch, Patrick; Rassinoux, Anne-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    To discuss the relationships between ontologies, terminologies and language in the context of Natural Language Processing (NLP) applications in order to show the negative consequences of confusing them. The viewpoints of the terminologist and (computational) linguist are developed separately, and then compared, leading to the presentation of reconciliation among these points of view, with consideration of the role of the ontologist. In order to encourage appropriate usage of terminologies, guidelines are presented advocating the simultaneous publication of pragmatic vocabularies supported by terminological material based on adequate ontological analysis. Ontologies, terminologies and natural languages each have their own purpose. Ontologies support machine understanding, natural languages support human communication, and terminologies should form the bridge between them. Therefore, future terminology standards should be based on sound ontology and do justice to the diversities in natural languages. Moreover, they should support local vocabularies, in order to be easily adaptable to local needs and practices.

  15. Where to Publish and Find Ontologies? A Survey of Ontology Libraries

    Science.gov (United States)

    d'Aquin, Mathieu; Noy, Natalya F.

    2011-01-01

    One of the key promises of the Semantic Web is its potential to enable and facilitate data interoperability. The ability of data providers and application developers to share and reuse ontologies is a critical component of this data interoperability: if different applications and data sources use the same set of well defined terms for describing their domain and data, it will be much easier for them to “talk” to one another. Ontology libraries are the systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies. Thus ontology libraries can serve as a link in enabling diverse users and applications to discover, evaluate, use, and publish ontologies. In this paper, we provide a survey of the growing—and surprisingly diverse—landscape of ontology libraries. We highlight how the varying scope and intended use of the libraries affect their features, content, and potential exploitation in applications. From reviewing eleven ontology libraries, we identify a core set of questions that ontology practitioners and users should consider in choosing an ontology library for finding ontologies or publishing their own. We also discuss the research challenges that emerge from this survey, for the developers of ontology libraries to address. PMID:22408576

  16. Towards exergaming commons: composing the exergame ontology for publishing open game data.

    Science.gov (United States)

    Bamparopoulos, Giorgos; Konstantinidis, Evdokimos; Bratsas, Charalampos; Bamidis, Panagiotis D

    2016-01-01

    It has been shown that exergames have multiple benefits for physical, mental and cognitive health. Only recently, however, researchers have started considering them as health monitoring tools, through collection and analysis of game metrics data. In light of this and initiatives like the Quantified Self, there is an emerging need to open the data produced by health games and their associated metrics in order for them to be evaluated by the research community in an attempt to quantify their potential health, cognitive and physiological benefits. We have developed an ontology that describes exergames using the Web Ontology Language (OWL); it is available at http://purl.org/net/exergame/ns#. After an investigation of key components of exergames, relevant ontologies were incorporated, while necessary classes and properties were defined to model these components. A JavaScript framework was also developed in order to apply the ontology to online exergames. Finally, a SPARQL Endpoint is provided to enable open data access to potential clients through the web. Exergame components include details for players, game sessions, as well as, data produced during these game-playing sessions. The description of the game includes elements such as goals, game controllers and presentation hardware used; what is more, concepts from already existing ontologies are reused/repurposed. Game sessions include information related to the player, the date and venue where the game was played, as well as, the results/scores that were produced/achieved. These games are subsequently played by 14 users in multiple game sessions and the results derived from these sessions are published in a triplestore as open data. We model concepts related to exergames by providing a standardized structure for reference and comparison. This is the first work that publishes data from actual exergame sessions on the web, facilitating the integration and analysis of the data, while allowing open data access through

  17. Mainstream web standards now support science data too

    Science.gov (United States)

    Richard, S. M.; Cox, S. J. D.; Janowicz, K.; Fox, P. A.

    2017-12-01

    The science community has developed many models and ontologies for the representation of scientific data and knowledge. In some cases these have been built as part of coordinated frameworks. For example, the biomedical community's OBO Foundry federates applications covering various aspects of the life sciences, which are united through reference to a common foundational ontology (BFO). The SWEET ontology, originally developed at NASA and now governed through ESIP, is a single large unified ontology for the earth and environmental sciences. On a smaller scale, GeoSciML provides a UML and corresponding XML representation of geological mapping and observation data. Some of the key concepts related to scientific data and observations have recently been incorporated into domain-neutral mainstream ontologies developed by the World Wide Web Consortium through their Spatial Data on the Web Working Group (SDWWG). OWL-Time has been enhanced to support the temporal reference systems needed for science, and has been deployed in a linked data representation of the International Chronostratigraphic Chart. The Semantic Sensor Network ontology has been extended to cover samples and sampling, including relationships between samples. Gridded data and time series are supported by applications of the statistical data-cube ontology (QB) for earth observations (the EO-QB profile) and spatio-temporal data (QB4ST). These standard ontologies and encodings can be used directly for science data, or can provide a bridge to specialized domain ontologies. There are a number of advantages in alignment with the W3C standards. The W3C vocabularies use discipline-neutral language and thus support cross-disciplinary applications directly without complex mappings. The W3C vocabularies are already aligned with the core ontologies that are the building blocks of the semantic web. The W3C vocabularies are each tightly scoped, thus encouraging good practices in the combination of complementary small ontologies. The W3C
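
    As an illustration of using these W3C vocabularies directly for science data, a hedged rdflib sketch describing a single observation with SOSA/SSN terms (the SOSA terms are standard; the instance names and values are invented):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")   # W3C SOSA/SSN namespace
EX = Namespace("http://example.org/obs#")        # invented instance namespace

g = Graph()
g.bind("sosa", SOSA)

# A single observation: a hypothetical river gauge measuring water temperature.
g.add((EX.obs1, RDF.type, SOSA.Observation))
g.add((EX.obs1, SOSA.madeBySensor, EX.gauge42))
g.add((EX.obs1, SOSA.hasFeatureOfInterest, EX.riverReach7))
g.add((EX.obs1, SOSA.observedProperty, EX.waterTemperature))
g.add((EX.obs1, SOSA.hasSimpleResult, Literal(11.3, datatype=XSD.double)))
g.add((EX.obs1, SOSA.resultTime,
       Literal("2017-07-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```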

  18. The eXtensible ontology development (XOD) principles and tool implementation to support ontology interoperability.

    Science.gov (United States)

    He, Yongqun; Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; Overton, James A; Ong, Edison

    2018-01-12

    Ontologies are critical to data/metadata and knowledge standardization, sharing, and analysis. With hundreds of biological and biomedical ontologies developed, it has become critical to ensure ontology interoperability and the usage of interoperable ontologies for standardized data representation and integration. The suite of web-based Ontoanimal tools (e.g., Ontofox, Ontorat, and Ontobee) supports different aspects of extensible ontology development. By summarizing the common features of Ontoanimal and other similar tools, we identified and proposed an "eXtensible Ontology Development" (XOD) strategy and its four associated principles. These XOD principles are to reuse existing terms and semantic relations from reliable ontologies, to develop and apply well-established ontology design patterns (ODPs), and to involve community efforts to support new ontology development, promoting standardized and interoperable data and knowledge representation and integration. The adoption of the XOD strategy, together with robust XOD tool development, will greatly support ontology interoperability and robust ontology applications that allow data to be Findable, Accessible, Interoperable and Reusable (i.e., FAIR).

  19. Core Semantics for Public Ontologies

    National Research Council Canada - National Science Library

    Suni, Niranjan

    2005-01-01

    ... (schemas or ontologies) with respect to objects. The DARPA Agent Markup Language (DAML) through the use of ontologies provides a very powerful way to describe objects and their relationships to other objects...

  20. Querying phenotype-genotype relationships on patient datasets using semantic web technology: the example of Cerebrotendinous xanthomatosis.

    Science.gov (United States)

    Taboada, María; Martínez, Diego; Pilo, Belén; Jiménez-Escrig, Adriano; Robinson, Peter N; Sobrido, María J

    2012-07-31

    Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and querying this dataset about relationships between phenotypes and genetic variants, at different levels of abstraction. Due to the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Web Ontology Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query Web Rule Language (SQWRL) to query relevant phenotype-genotype bidirectional relationships. The work tests the use of semantic web technology in the biomedical research domain of cerebrotendinous xanthomatosis (CTX), using a real dataset and ontologies. A framework to query relevant phenotype-genotype bidirectional relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework adopt the standard logical model of an open world assumption. This work demonstrates how semantic web technologies can be used to support flexible representation and computational inference mechanisms

  1. Modularising ontology and designing inference patterns to personalise health condition assessment: the case of obesity.

    Science.gov (United States)

    Sojic, Aleksandra; Terkaj, Walter; Contini, Giorgia; Sacco, Marco

    2016-05-04

    The public health initiatives for obesity prevention are increasingly exploiting the advantages of smart technologies that can register various kinds of data related to physical, physiological, and behavioural conditions. Since individual features and habits vary among people, the design of appropriate intervention strategies for motivating changes in behavioural patterns towards a healthy lifestyle requires the interpretation and integration of collected information, while considering individual profiles in a personalised manner. The ontology-based modelling is recognised as a promising approach in facing the interoperability and integration of heterogeneous information related to characterisation of personal profiles. The presented ontology captures individual profiles across several obesity-related knowledge-domains structured into dedicated modules in order to support inference about health condition, physical features, behavioural habits associated with a person, and relevant changes over time. The modularisation strategy is designed to facilitate ontology development, maintenance, and reuse. The domain-specific modules formalised in the Web Ontology Language (OWL) integrate the domain-specific sets of rules formalised in the Semantic Web Rule Language (SWRL). The inference rules follow a modelling pattern designed to support personalised assessment of health condition as age- and gender-specific. The test cases exemplify a personalised assessment of the obesity-related health conditions for the population of teenagers. The paper addresses several issues concerning the modelling of normative concepts related to obesity and depicts how the public health concern impacts classification of teenagers according to their phenotypes. The modelling choices regarding the ontology-structure are explained in the context of the modelling goal to integrate multiple knowledge-domains and support reasoning about the individual changes over time. The presented modularisation

  2. Supporting Analogy-based Effort Estimation with the Use of Ontologies

    Directory of Open Access Journals (Sweden)

    Joanna Kowalska

    2014-06-01

    Full Text Available The paper concerns effort estimation of software development projects, in particular at the level of product delivery stages. It proposes a new approach to modeling project data to support expert-supervised analogy-based effort estimation. The data is modeled using Semantic Web technologies, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Moreover, in the paper we define a method of supervised case-based reasoning. The method enables searching for similar project tasks at different levels of abstraction. For instance, instead of searching for a task performed by a specific person, one could look for tasks performed by people with similar capabilities. The proposed method relies on an ontology that defines the core concepts and relationships. However, it is possible to introduce new classes and relationships without the need to alter the search mechanisms. Finally, we implemented a prototype tool that was used to preliminarily validate the proposed approach. We observed that the proposed approach could potentially help experts in estimating non-trivial tasks that are often underestimated.

  3. The Development of Ontology from Multiple Databases

    Science.gov (United States)

    Kasim, Shahreen; Aswa Omar, Nurul; Fudzee, Mohd Farhan Md; Azhar Ramli, Azizul; Aizi Salamat, Mohamad; Mahdin, Hairulnizam

    2017-08-01

    The halal industry is the fastest-growing global business across the world. The halal food industry is thus crucial for Muslims all over the world, as it serves to assure them that the food items they consume daily are syariah compliant. Currently, ontologies are widely used in computer science areas such as heterogeneous information processing on the web, the semantic web, and information retrieval. However, ontologies have still not been used widely in the halal industry. Today, the Muslim community still has problems verifying the halal status of products in the market, especially foods containing E numbers. This research tried to solve the problem of validating halal status from various halal sources. Various chemical ontologies from multiple databases were found to help this ontology development. The E numbers in this chemical ontology are codes for chemicals that can be used as food additives. With this E-number ontology, the Muslim community could effectively identify and verify the halal status of products in the market.

  4. Ontology Based Queries - Investigating a Natural Language Interface

    NARCIS (Netherlands)

    van der Sluis, Ielka; Hielkema, F.; Mellish, C.; Doherty, G.

    2010-01-01

    In this paper we look at what may be learned from a comparative study examining non-technical users with a background in social science browsing and querying metadata. Four query tasks were carried out with a natural language interface and with an interface that uses a web paradigm with hyperlinks.

  5. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.

  6. Sample ontology, GOstat and ontology term enrichment - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Sample ontology, GOstat and ontology term enrichment - FANTOM5 | LSDB Archive. Data file: FANTOM....biosciencedbc.jp/archive/fantom5/datafiles/LATEST/extra/Ontology/ (file size: 1.8 MB).

  7. Flavours of XChange, a Rule-Based Reactive Language for the (Semantic) Web

    OpenAIRE

    Bailey, James; Bry, François; Eckert, Michael; Patrânjan, Paula Lavinia

    2005-01-01

    This article introduces XChange, a rule-based reactive language for the Web. Stressing application scenarios, it first argues that high-level reactive languages are needed for both Web and Semantic Web applications. Then, it discusses technologies and paradigms relevant to high-level reactive languages for the (Semantic) Web. Finally, it presents the Event-Condition-Action rules of XChange.

  8. Design of Ontology-Based Sharing Mechanism for Web Services Recommendation Learning Environment

    Science.gov (United States)

    Chen, Hong-Ren

    The number of digital learning websites is growing as a result of advances in computer technology and new techniques in web page creation. These sites contain a wide variety of information but may be a source of confusion to learners who fail to find the information they are seeking. This has led to the concept of recommendation services to help learners acquire information and learning resources that suit their requirements. Learning content like this cannot be reused by other digital learning websites. A successful recommendation service that satisfies a certain learner must cooperate with many other digital learning objects so that it can achieve the required relevance. The study proposes using the theory of knowledge construction in ontology to make the sharing and reuse of digital learning resources possible. The learning recommendation system is accompanied by the recommendation of appropriate teaching materials to help learners enhance their learning abilities. A variety of diverse learning components scattered across the Internet can be organized through an ontological process so that learners can use information by storing, sharing, and reusing it.

  9. Ontology-driven education: Teaching anatomy with intelligent 3D games on the web

    Science.gov (United States)

    Nilsen, Trond

    Human anatomy is a challenging and intimidating subject whose understanding is essential to good medical practice, taught primarily using a combination of lectures and the dissection of human cadavers. Lectures are cheap and scalable, but do a poor job of teaching spatial understanding, whereas dissection lets students experience the body's interior first-hand, but is expensive, cannot be repeated, and is often imperfect. Educational games and online learning activities have the potential to supplement these teaching methods in a cheap and relatively effective way, but they are difficult for educators to customize for particular curricula and lack the tutoring support that human instructors provide. I present an approach to the creation of learning activities for anatomy called ontology-driven education, in which the Foundational Model of Anatomy, an ontological representation of knowledge about anatomy, is leveraged to generate educational content, model student knowledge, and support learning activities and games in a configurable web-based educational framework for anatomy.

  10. A Dynamic Defense Modeling and Simulation Methodology using Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Kangsun Lee

    2010-04-01

    Full Text Available Defense Modeling and Simulations require interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to dynamically adapt themselves to various war-game events, commands and controls. In this paper, we propose a semantic web service based methodology for developing war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator is constructed to demonstrate the methodology and successfully shows that the level of interoperability and autonomy can be greatly improved.

  11. DMTO: a realistic ontology for standard diabetes mellitus treatment.

    Science.gov (United States)

    El-Sappagh, Shaker; Kwak, Daehan; Ali, Farman; Kwak, Kyung-Sup

    2018-02-06

    Treatment of type 2 diabetes mellitus (T2DM) is a complex problem. A clinical decision support system (CDSS) based on massive and distributed electronic health record data can facilitate the automation of this process and enhance its accuracy. The most important component of any CDSS is its knowledge base. This knowledge base can be formulated using ontologies. The formal description logic of ontology supports the inference of hidden knowledge. Building a complete, coherent, consistent, interoperable, and sharable ontology is a challenge. This paper introduces the first version of the newly constructed Diabetes Mellitus Treatment Ontology (DMTO) as a basis for shared-semantics, domain-specific, standard, machine-readable, and interoperable knowledge relevant to T2DM treatment. It is a comprehensive ontology and provides the highest coverage and the most complete picture of coded knowledge about T2DM patients' current conditions, previous profiles, and T2DM-related aspects, including complications, symptoms, lab tests, interactions, treatment plan (TP) frameworks, and glucose-related diseases and medications. It adheres to the design principles recommended by the Open Biomedical Ontologies Foundry and is based on ontological realism that follows the principles of the Basic Formal Ontology and the Ontology for General Medical Science. DMTO is implemented under Protégé 5.0 in Web Ontology Language (OWL) 2 format and is publicly available through the National Center for Biomedical Ontology's BioPortal at http://bioportal.bioontology.org/ontologies/DMTO . The current version of DMTO includes more than 10,700 classes, 277 relations, 39,425 annotations, 214 semantic rules, and 62,974 axioms. We provide proof of concept for this approach to modeling TPs. The ontology is able to collect and analyze most features of T2DM as well as customize chronic TPs with the most appropriate drugs, foods, and physical exercises. DMTO is ready to be used as a knowledge base for

  12. Ontological problems of contemporary linguistics

    Directory of Open Access Journals (Sweden)

    А В Бондаренко

    2009-03-01

    Full Text Available The article studies problems of linguistic ontology, such as the evolution of essential-existential views of language, the interrelations within the Being-Language-Man triad, the gnosiological principles of linguistics, the localization of the essence of language, and «expression» as a metalinguistic unit of language, as well as the architectonics of the language personality, among others.

  13. Ontology-based Information Retrieval

    DEFF Research Database (Denmark)

    Styltsvig, Henrik Bulskov

    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information...... retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use......, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun...
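
    As a hedged sketch of one common family of ontological similarity measures used in this line of work (not necessarily the measure adopted in the thesis), Wu-Palmer-style similarity computed from concept depths in a toy is-a hierarchy:

```python
# Toy is-a hierarchy (child -> parent); concept names are invented.
PARENT = {
    "poodle": "dog", "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "animal": None,
}

def path_to_root(concept):
    """Concepts on the path from a concept up to the root, inclusive."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = PARENT[concept]
    return path

def depth(concept):
    return len(path_to_root(concept))   # the root has depth 1

def least_common_subsumer(c1, c2):
    ancestors = set(path_to_root(c1))
    for candidate in path_to_root(c2):  # walk upward from c2
        if candidate in ancestors:
            return candidate
    return None

def wu_palmer(c1, c2):
    """Wu-Palmer similarity: 2 * depth(lcs) / (depth(c1) + depth(c2))."""
    lcs = least_common_subsumer(c1, c2)
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("poodle", "cat"))  # 2*2 / (4+3) ≈ 0.57
```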

  14. Reviewing the design of DAML+OIL: An ontology language for the Semantic Web

    NARCIS (Netherlands)

    Horrocks, Ian; Patel-Schneider, Peter F.; Van Harmelen, Frank

    2002-01-01

    In the current "Syntactic Web", uninterpreted syntactic constructs are given meaning only by private off-line agreements that are inaccessible to computers. In the Semantic Web vision, this is replaced by a web where both data and its semantic definition are accessible and manipulable by computer

  15. Data ontology and an information system realization for web-based management of image measurements

    Directory of Open Access Journals (Sweden)

    Dimiter Prodanov

    2011-11-01

    Full Text Available Image acquisition, processing and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Management of the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous metadata and the interoperability between the software used. The use of integrated software solutions for morphometry and management of imaging data, in combination with ontologies, can reduce metadata loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system, called LabIS. The system has the objectives of automating (i) the process of storage, annotation and querying of image measurements and (ii) providing means for data sharing with 3rd party applications consuming measurement data using open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application logic middle tier realizing a web-based user interface for reporting and annotation and a web service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public domain image processing software, via integrated clients. Instrumental for the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations and ImageJ settings. Integration of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can also be used as a measurement repository that can be transparently accessed by computational environments, such as Matlab. Finally, the system can be used as a data sharing tool.

  16. An ontology model for nursing narratives with natural language generation technology.

    Science.gov (United States)

    Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung

    2013-01-01

    The purpose of this study was to develop an ontology model to generate nursing narratives that are as natural as human language from the entity-attribute-value triplets of a detailed clinical model, using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
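
    The paper's ontology and sentence templates are not reproduced here; as a rough illustration of template-based generation from entity-attribute-value triplets, a Python sketch with invented triplets and templates:

```python
# Hypothetical entity-attribute-value triplets from a detailed clinical model,
# grouped by nursing-process step at documentation time. All content is invented.
triplets = [
    ("assessment",   ("patient", "pain_score", "7/10")),
    ("assessment",   ("patient", "pain_location", "lower back")),
    ("intervention", ("nurse", "administered", "acetaminophen 650 mg PO")),
]

# One sentence template per nursing-process step.
TEMPLATES = {
    "assessment":   "On assessment, the {entity}'s {attribute} was {value}.",
    "intervention": "The {entity} {attribute} {value}.",
}

def generate_narrative(eav_triplets):
    """Turn each (step, (entity, attribute, value)) triplet into a sentence."""
    sentences = []
    for step, (entity, attribute, value) in eav_triplets:
        sentences.append(TEMPLATES[step].format(
            entity=entity, attribute=attribute.replace("_", " "), value=value))
    return " ".join(sentences)

print(generate_narrative(triplets))
```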

  17. Adding question answering to an e-tutor for programming languages

    Science.gov (United States)

    Taylor, Kate; Moore, Simon

    Control over a closed domain of textual material removes many question answering issues, as does an ontology that is closely intertwined with its sources. This pragmatic, shallow approach to many challenging areas of research in adaptive hypermedia, question answering, intelligent tutoring and human-computer interaction has been put into practice at Cambridge in the Computer Science undergraduate course to teach the hardware description language Verilog. This language itself poses many challenges as it crosses the interdisciplinary boundary between hardware and software engineers, giving rise to several human ontologies as well as the programming language itself. We present further results from our formal and informal surveys. We look at further work to increase the dialogue between student and tutor and export our knowledge to the Semantic Web.

  18. Survey on Ontology Mapping

    Science.gov (United States)

    Zhu, Junwu

    To create a sharable semantic space for terms drawn from different domain ontologies and knowledge systems, ontology mapping has become a hot research topic in the Semantic Web community. In this paper, the motivating factors of ontology mapping research are given first, and then five dominant theories and methods, such as information access technology, machine learning, linguistics, structure graphs and similarity, are illustrated according to their technology class. Before we analyze the new requirements and take a longer view, the contributions of these theories and methods are summarized in detail. Finally, this paper suggests designing a group of semantic connectors with the ability of transfer learning for OWL 2 extended with constraints, together with an axiom-based ontology mapping theory, so as to provide a new methodology for ontology mapping.

  19. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface: (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multistage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.

  20. Challenges for Rule Systems on the Web

    Science.gov (United States)

    Hu, Yuh-Jong; Yeh, Ching-Long; Laun, Wolfgang

    The RuleML Challenge started in 2007 with the objective of inspiring work on the management, integration, interoperation and interchange of rules in an open distributed environment, such as the Web. Rules are usually classified into three types: deductive rules, normative rules, and reactive rules. Reactive rules are further classified into ECA rules and production rules. The study of combining rules and ontologies traces back to earlier active rule systems for relational and object-oriented (OO) databases. Recently, this issue has become one of the most important research problems in the Semantic Web. Once we consider a computer-executable policy as a declarative set of rules and ontologies that guides the behavior of entities within a system, we have a flexible way to implement real-world policies without rewriting the computer code, as we did before. Fortunately, we have de facto rule markup languages, such as RuleML or RIF, to achieve the portability and interchange of rules for different rule systems. Otherwise, executing real-life rule-based applications on the Web would be almost impossible. Several commercial or open source rule engines are available for rule-based applications. However, we still need a standard rule language and benchmark, not only to compare rule systems but also to measure the progress in the field. Finally, a number of real-life rule-based use cases will be investigated to demonstrate the applicability of current rule systems on the Web.

  1. Ontology construction and application in practice case study of health tourism in Thailand.

    Science.gov (United States)

    Chantrapornchai, Chantana; Choksuchat, Chidchanok

    2016-01-01

    Ontology is one of the key components of the semantic web. It contains the core knowledge for an effective search. However, building an ontology requires carefully collected knowledge that is very domain-sensitive. In this work, we present the practice of ontology construction for a case study of health tourism in Thailand. The whole process follows the METHONTOLOGY approach, which consists of the following phases: information gathering, corpus study, ontology engineering, evaluation, publishing, and application construction. Different sources of data, such as structured web documents like HTML pages and other documents, are acquired in the information gathering process. Tourism corpora from various tourism texts and standards are explored. The ontology is evaluated in two ways: automatic reasoning using Pellet and RacerPro, and questionnaires completed by experts in the relevant domains (tourism domain experts and ontology experts). The ontology's usability is demonstrated via the semantic web application and via example axioms. The developed ontology is the first health tourism ontology in Thailand with a published application.

  2. ONTOLOGY-DRIVEN TOOL FOR UTILIZING PROGRAMMING STYLES

    Directory of Open Access Journals (Sweden)

    Nikolay Sidorov

    2017-07-01

    Full Text Available The activities of a programmer will be more effective, and the software more understandable, when programming styles (standards) that provide clarity of software texts are used within the software development process. Purpose: In this research, we present a tool that realizes a new ontology-based methodology with automated reasoning techniques for utilizing programming styles. In particular, we focus on representing programming styles in the form of formal ontologies, and study how a description logic reasoner can assist programmers in utilizing programming standards. Our research hypothesis is as follows: an ontological representation of programming styles can provide additional benefits over existing approaches in supporting programmers in utilizing programming standards. Our research goal is to develop a tool to support ontology-based utilization of programming styles. Methods: ontological representation of programming styles; object-oriented programming; ontology-driven utilization of programming styles. Results: an architecture was designed and a tool was developed in the Java language, providing tool support for the ontology-driven method of applying programming styles. The features of the implementation and application of the tool are illustrated using the naming rules of the Java programming language standard as an example. Discussion: application of programming styles when coding programs; the lack of automated tools for applying programming standards; a tool based on a new method of ontology-driven application of programming styles; an example of the implementation of the tool architecture for the naming rules of the Java language standard.

  3. BiOSS: A system for biomedical ontology selection.

    Science.gov (United States)

    Martínez-Romero, Marcos; Vázquez-Naya, José M; Pereira, Javier; Pazos, Alejandro

    2014-04-01

    In biomedical informatics, ontologies are considered a key technology for annotating, retrieving and sharing the huge volume of publicly available data. Due to the increasing amount, complexity and variety of existing biomedical ontologies, choosing the ones to be used in a semantic annotation problem or to design a specific application is a difficult task. As a consequence, the design of approaches and tools addressed to facilitate the selection of biomedical ontologies is becoming a priority. In this paper we present BiOSS, a novel system for the selection of biomedical ontologies. BiOSS evaluates the adequacy of an ontology to a given domain according to three different criteria: (1) the extent to which the ontology covers the domain; (2) the semantic richness of the ontology in the domain; (3) the popularity of the ontology in the biomedical community. BiOSS has been applied to 5 representative problems of ontology selection. It has also been compared to existing methods and tools. Results are promising and show the usefulness of BiOSS to solve real-world ontology selection problems. BiOSS is openly available both as a web tool and a web service. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Domain Specific Languages for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus

    This dissertation shows how domain specific languages may be applied to the domain of interactive Web services to obtain flexible, safe, and efficient solutions. We show how each of four key aspects of interactive Web services involving sessions, dynamic creation of HTML/XML documents, form field…, that supports virtually all aspects of the development of interactive Web services and provides flexible, safe, and efficient solutions.

  5. Ontology Assisted Formal Specification Extraction from Text

    Directory of Open Access Journals (Sweden)

    Andreea Mihis

    2010-12-01

    Full Text Available In the field of knowledge processing, ontologies are the most important means. They make it possible for the computer to better understand natural language and to make judgments. In this paper, a method which uses ontologies in the semi-automatic extraction of formal specifications from a natural language text is proposed.

  6. Versioning System for Distributed Ontology Development

    Science.gov (United States)

    2016-03-15

    S.K. Damodaran, formerly Group 59, Massachusetts Institute of Technology Lincoln Laboratory, 15 March 2016. This material is based on work supported by the Assistant Secretary of Defense for… [Only report front matter and reference fragments are available for this record, e.g. "Framework for Grid Computing and Semantic Web Services," Trust Management, Springer Berlin Heidelberg (2004), pp. 16-26, and the W3C Time Ontology.]

  7. CONCEPTION OF ONTOLOGY-BASED SECTOR EDUCATIONAL SPACE

    Directory of Open Access Journals (Sweden)

    V. I. Khabarov

    2014-09-01

    Full Text Available Purpose: The aim of the research is to demonstrate the need for a Conception of Ontology-based Sector Educational Space. This Conception could become the basis for the integration of transport sector university information resources into an open virtual network information resource and the global educational space. Its content will be presented by standardized ontology-based knowledge packages for educational programs in Russian and English. Methodology: Complex-based, ontological and content-based approaches, together with the scientific principles of interdisciplinarity and standardization of knowledge, are suggested as the methodological basis of the research. Results: The Conception of Ontology-based Sector Educational Space (railway transport), the method for developing knowledge packages as ontologies in Russian and English, and the Russian-English Transport Glossary as a separate ontology are among the expected results of the project. Practical implications: The Conception could become the basis for an open project to establish a common resource center for transport (railway) universities. The Conception of ontology-based sector educational space (railway transport) could be adapted to the activities of universities in other economic sectors.

  8. An Iterative and Incremental Approach for E-Learning Ontology Engineering

    Directory of Open Access Journals (Sweden)

    Sudath Rohitha Heiyanthuduwage

    2009-03-01

    Full Text Available There has been a boost of interest in ontologies with the developments in Semantic Web technologies. Ontologies play a vital role in the semantic web. Although a lot of work has been done on ontologies, a standard framework for ontology engineering has not yet been defined. The ontology engineering methodologies currently available still need improvement. The effort of our work is to integrate various methods, techniques and tools into the different stages of a proposed ontology engineering life cycle, to create a comprehensive framework for ontology engineering. Current methodologies discuss ontology engineering stages and collaborative environments with user collaboration. However, increasing effectiveness and correct inference have received less attention. Moreover, these methodologies provide little discussion of the usability of domain ontologies. We consider these aspects to be more important in our work. Ontology engineering has been done for various domains and for various purposes. Our effort is to propose an iterative and incremental approach to ontology engineering, especially for the e-learning domain, with the intention of achieving higher usability and effectiveness of e-learning systems. This paper introduces the different aspects of the proposed ontology engineering framework and its evaluation.

  9. The use of ontologies in the spatial planning domain

    CSIR Research Space (South Africa)

    Kaczmarek, I

    2011-07-01

    Full Text Available are based on RDF (Resource Description Framework) and RDFS (Resource Description Framework Schema) model. A CLOSER LOOK The description of datasets and the objects contained therein using ontologies is a way of representing knowledge about space. Having... of these data. The added value here is that knowledge can be derived. INTRODUCTION TO ONTOLOGIES Ontologies are considered to be a core element of the Semantic Web, which is defined as ?the extension of the World Wide Web that enables people to share content...

  10. Language Learning: The Merge of Teletandem and Web 2.0 Tools

    Science.gov (United States)

    Abreu-Ellis, Carla; Ellis, Jason Brent; Carle, Abbie; Blevens, Jared; Decker, Aline; Carvalho, Leticia; Macedo, Patricia

    2013-01-01

    The following action research provides an overview of students' perceptions of the incorporation of Web 2.0 technologies into in-tandem language learning activities. American and Brazilian college students were partnered in order to work in-tandem through pre-determined language activities using Web 2.0 technologies to learn a second language,…

  11. Virtual Learning Spaces Creation Based on the Systematic Population of an Ontology

    Directory of Open Access Journals (Sweden)

    Cristiana Araújo

    2018-02-01

    Full Text Available The creation of Learning Spaces on the Web, like the exhibition rooms of virtual museums, supported by an ontology that enables conceptual navigation over the exposed learning objects, is a hard and complex task, but one of utmost importance for the success of the knowledge acquisition process. In our opinion, the creation must be systematic and reusable from case to case, based on querying the ontology instances that describe the museum assets. We will discuss how the ontology definition drives the way SPARQL (SPARQL Protocol and RDF Query Language) queries extract information from the triple store to be prepared for visualization. However, to enable this approach, we need to populate the ontology in an automatic way, extracting the data from the annotated documents in the institution's repository. We intend to show how that process can be implemented using the Museum of the Person (MP) as a case study, describing the XML2RDF tool developed. To illustrate the complete approach proposed, we include a guided visit to the exhibition rooms of the MP created according to that proposal and with our tools.
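
    As an illustration of the kind of ontology-driven query described above, the following minimal sketch (Python with rdflib; the namespace, class and property names are hypothetical, not taken from the Museum of the Person ontology) runs a SPARQL query over a populated triple store to list the objects to be shown in each exhibition room:

      # Minimal sketch: querying a populated triple store with SPARQL via rdflib.
      # All namespace, class, and property names below are hypothetical.
      from rdflib import Graph

      g = Graph()
      g.parse("museum_instances.ttl", format="turtle")  # assumed output of an XML2RDF-style step

      query = """
      PREFIX ex: <http://example.org/museum#>
      SELECT ?item ?title ?room
      WHERE {
          ?item a ex:LearningObject ;
                ex:title ?title ;
                ex:exhibitedIn ?room .
      }
      ORDER BY ?room
      """

      for item, title, room in g.query(query):
          print(f"{room}: {title} ({item})")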

  12. NCBO Ontology Recommender 2.0: an enhanced approach for biomedical ontology recommendation.

    Science.gov (United States)

    Martínez-Romero, Marcos; Jonquet, Clement; O'Connor, Martin J; Graybeal, John; Pazos, Alejandro; Musen, Mark A

    2017-06-07

    …and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available (both via the user interface at http://bioportal.bioontology.org/recommender and via a Web service API).

  13. ADO: a disease ontology representing the domain knowledge specific to Alzheimer's disease.

    Science.gov (United States)

    Malhotra, Ashutosh; Younesi, Erfan; Gündel, Michaela; Müller, Bernd; Heneka, Michael T; Hofmann-Apitius, Martin

    2014-03-01

    Biomedical ontologies offer the capability to structure and represent domain-specific knowledge semantically. Disease-specific ontologies can facilitate knowledge exchange across multiple disciplines, and ontology-driven mining approaches can generate great value for modeling disease mechanisms. However, in the case of neurodegenerative diseases such as Alzheimer's disease, there is a lack of formal representation of the relevant knowledge domain. The Alzheimer's disease ontology (ADO) was constructed in accordance with the ontology building life cycle. The Protégé OWL editor was used as a tool for building ADO in Web Ontology Language (OWL) format. ADO was developed with the purpose of containing information relevant to four main biological views (preclinical, clinical, etiological, and molecular/cellular mechanisms) and was enriched by adding synonyms and references. Validation of the lexicalized ontology by means of named entity recognition-based methods showed a satisfactory performance (F score = 72%). In addition to structural and functional evaluation, a clinical expert in the field performed a manual evaluation and curation of ADO. Through integration of ADO into an information retrieval environment, we show that the ontology supports semantic search in scientific text. The usefulness of ADO is authenticated by dedicated use case scenarios. The development of ADO as an open ontology is a first attempt to organize information related to Alzheimer's disease in a formalized, structured manner. We demonstrate that ADO is able to capture both established and scattered knowledge existing in scientific text. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  14. Semantic similarity between ontologies at different scales

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qingpeng; Haglin, David J.

    2016-04-01

    In the past decade, existing and new knowledge and datasets have been encoded in different ontologies for semantic web and biomedical research. Ontologies are often very large in terms of the number of concepts and relationships, which makes the analysis of ontologies and the represented knowledge graph computationally expensive and time-consuming. As the ontologies of various semantic web and biomedical applications usually show explicit hierarchical structures, it is interesting to explore the trade-offs between ontological scales and the preservation/precision of results when we analyze ontologies. This paper presents a first effort to examine this idea by studying the relationship between scaling biomedical ontologies at different levels and the resulting semantic similarity values. We evaluate the semantic similarity between three Gene Ontology slims (Plant, Yeast, and Candida, the latter two of which belong to the same kingdom, Fungi) using four popular measures commonly applied to biomedical ontologies (Resnik, Lin, Jiang-Conrath, and SimRel). The results of this study demonstrate that, with proper selection of scaling levels and similarity measures, we can significantly reduce the size of ontologies without losing substantial detail. In particular, the performance of Jiang-Conrath and Lin is more reliable and stable than that of the other two measures in this experiment, as shown by (a) consistently indicating that Yeast and Candida are more similar to each other (as compared to Plant) at different scales, and (b) small deviations of the similarity values after excluding a majority of nodes from several lower scales. This study provides a deeper understanding of the application of semantic similarity to biomedical ontologies, and sheds light on how to choose appropriate semantic similarity measures for biomedical engineering.
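
    For reference, three of the measures named above can be computed directly from information content (IC) values; the sketch below (Python) assumes the IC of each term and of their most informative common ancestor (MICA) has already been derived from the ontology, and the numbers in the usage example are made up:

      # Illustrative implementations of the Resnik, Lin and Jiang-Conrath measures,
      # assuming precomputed information content (IC) values.
      def resnik(ic_mica: float) -> float:
          # Resnik similarity is the IC of the most informative common ancestor.
          return ic_mica

      def lin(ic_a: float, ic_b: float, ic_mica: float) -> float:
          # Lin similarity: 2 * IC(MICA) / (IC(a) + IC(b)).
          return 2.0 * ic_mica / (ic_a + ic_b) if (ic_a + ic_b) > 0 else 0.0

      def jiang_conrath_distance(ic_a: float, ic_b: float, ic_mica: float) -> float:
          # Jiang-Conrath semantic distance; smaller values mean more similar terms.
          return ic_a + ic_b - 2.0 * ic_mica

      # Made-up IC values for two terms and their common ancestor:
      print(lin(ic_a=6.2, ic_b=5.8, ic_mica=4.1))      # ~0.68
      print(jiang_conrath_distance(6.2, 5.8, 4.1))     # 3.8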

  15. Use of ontologies and the semantic web to support knowledge management

    Directory of Open Access Journals (Sweden)

    Ana Efigenia Sandoval Cantor

    2007-01-01

    Full Text Available Semantic web services are component applications of Web 2.0. They have consolidated as an evolution of traditional web services, enriched with descriptors that make it possible to specialize information searches according to meaning and to focus them on the user's needs. Ontologies are semantic foundations that already have some practical applications, developed especially in the health sciences and in philosophy. Web services have become an essential technology for cooperation on the Internet, but they require mechanisms for their integration, establishing themselves as a technological tool that contributes to globalization and knowledge management in organizational activities and in research, with services that improve response times for users in terms of efficient and fast searches. This extended Web relies on universal languages such as description logic, intelligent agents, and ontologies, resolving the semantic shortcomings that today make access to information on the Internet difficult and costly. While the standards related to the composition or collaboration of web services provide a first approach, their convergence with current technologies offers a broad panorama toward an environment in which the search, effectiveness and execution of services is completely automated. Accordingly, the Semantic Web is an improved Web, endowed with greater meaning, with which any Internet user will be able to find answers to their questions more quickly and easily, thanks to a better definition of the information…

  16. Ontology-aided Data Fusion (Invited)

    Science.gov (United States)

    Raskin, R.

    2009-12-01

    An ontology provides semantic descriptions that are analogous to those in a dictionary but are readable by both computers and humans. A dataset or service is semantically annotated when it is formally associated with elements of an ontology. The ESIP Federation Semantic Web Cluster has developed a set of ontologies to describe datatypes and data services that can be used to support automated data fusion. The service ontology includes descriptors of the service function, its inputs/outputs, and its invocation method. The datatype descriptors resemble typical metadata fields (data format, data model, data structure, originator, etc.) augmented with descriptions of the meaning of the data. These ontologies, in combination with the SWEET science ontology, enable registered data fusion services to be chained together and implemented in a scientifically meaningful way, based on machine understanding of the associated data and services. This presentation describes initial results and experiences in automated data fusion.

  17. A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.

    Science.gov (United States)

    Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-10-08

    Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of a data-querying system along the following three considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, a clinical data generator was implemented to automatically generate clinical data in the repository, and the generated data were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (i.e., "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting…

  18. Conceptual querying through ontologies

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik

    2009-01-01

    We present here an approach to conceptual querying where the aim is, given a collection of textual database objects or documents, to target an abstraction of the entire database content in terms of the concepts appearing in documents, rather than the documents in the collection. The approach is motivated by an obvious need for users to survey huge volumes of objects in query answers. An ontology formalism and a special notion of "instantiated ontology" are introduced. The latter is a structure reflecting the content in the document collection in that it is a restriction of a general world knowledge ontology to the concepts instantiated in the collection. The notion of ontology-based similarity is briefly described, language constructs for direct navigation and retrieval of concepts in the ontology are discussed, and approaches to conceptual summarization are presented.

  19. Ontological Encoding of GeoSciML and INSPIRE geological standard vocabularies and schemas: application to geological mapping

    Science.gov (United States)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario; Fubelli, Giandomenico; Giardino, Marco

    2016-04-01

    Encoding of geologic knowledge in formal languages is an ambitious task, aiming at the interoperability and organic representation of geological data, and the semantic characterization of geologic maps. Initiatives such as GeoScience Markup Language (last version is GeoSciML 4, 2015[1]) and INSPIRE "Data Specification on Geology" (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013[2]), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG[3]), have been promoting information exchange of geologic knowledge. There have also been limited attempts to encode the knowledge in a machine-readable format, especially in the lithology domain (see e.g. the CGI_Lithology ontology[4]), but a comprehensive ontological model that connects the several knowledge sources is still lacking. This presentation concerns the "OntoGeonous" initiative, which aims at encoding the geologic knowledge, as expressed through the standard vocabularies, schemas and data models mentioned above, through a number of interlinked computational ontologies, based on the languages of the Semantic Web and the paradigm of Linked Open Data. The initiative proceeds in parallel with a concrete case study, concerning the setting up of a synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap" (developed by the CNR Institute of Geosciences and Earth Resources, CNR IGG, Torino), where the description and classification of GeologicUnits has been supported by the modeling and implementation of the ontologies. We have devised a tripartite ontological model called OntoGeonous that consists of: 1) an ontology of the geologic features (in particular, GeologicUnit, GeomorphologicFeature, and GeologicStructure[5], modeled from the definitions and UML schemata of CGI vocabularies[6], GeoScienceML and INSPIRE, and aligned with the Planetary realm of the NASA SWEET ontology[7]), 2) an ontology of the Earth materials (as defined by the…

  20. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when the source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  1. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to the Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for the Geospatial Semantic Web; VGI (Volunteered Geographic Information) and the Geospatial Semantic Web; challenges of the Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems, such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  2. Learning expressive ontologies

    CERN Document Server

    Völker, J

    2009-01-01

    This publication advances the state of the art in ontology learning by presenting a set of novel approaches to the semi-automatic acquisition, refinement and evaluation of logically complex axiomatizations. It has been motivated by the fact that the realization of the semantic web envisioned by Tim Berners-Lee is still hampered by the lack of ontological resources, while at the same time more and more applications of semantic technologies emerge from fast-growing areas such as e-business or the life sciences. Such knowledge-intensive applications, requiring large-scale reasoning over complex domains…

  3. BioPortal: An Open-Source Community-Based Ontology Repository

    Science.gov (United States)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source on-line community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its…

  4. Application of Ontologies for Big Earth Data

    Science.gov (United States)

    Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.

    2014-12-01

    Connected data is smarter data! Earth Science research infrastructure must do more than just support temporal, geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers with the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) ontologies are a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET ontology collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular ontology collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of Earth system science concepts and Space science. This presentation discusses semantic web technologies as the enabling technology for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), the 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." [1] [1] Savas Parastatidis: A platform for all that we know

  5. Ontology-based concept map learning path reasoning system using SWRL rules

    Energy Technology Data Exchange (ETDEWEB)

    Chu, K.-K.; Lee, C.-I. [National Univ. of Tainan, Taiwan (China). Dept. of Computer Science and Information Learning Technology

    2010-08-13

    Concept maps are graphical representations of knowledge. Concept mapping may reduce students' cognitive load and extend simple memory function. The purpose of this study was to diagnose students' concept map learning abilities and to provide personally constructive advice dependent on their learning path and progress. An ontology is a useful means with which to represent and store concept map information. Semantic web rule language (SWRL) rules are easy to understand and to use for specific reasoning services. This paper discussed the selection of a grade 7 lakes and rivers curriculum for which to devise a concept map learning path reasoning service. The paper defined a concept map e-learning ontology and two SWRL semantic rules, and collected users' concept map learning path data to infer implicit knowledge and to recommend the next learning path for users. It was concluded that the designs devised in this study were feasible and advanced, and that the ontology preserved the domain knowledge. The SWRL rules provided an abstraction model for the inferred properties. Since the ontology and the SWRL rules are separate components, they do not interfere with each other while either is maintained, ensuring persistent system extensibility and robustness. 15 refs., 1 tab., 8 figs.
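
    To make the mechanism concrete, the following minimal sketch (Python with owlready2; the ontology IRI, the classes and properties, and the rule itself are hypothetical, not those of the paper) shows how a SWRL rule can recommend the next concept on a learning path:

      # Hypothetical sketch of a SWRL rule over a concept-map ontology (owlready2).
      from owlready2 import get_ontology, Thing, Imp

      onto = get_ontology("http://example.org/conceptmap.owl")

      with onto:
          class Learner(Thing): pass
          class Concept(Thing): pass
          class hasMastered(Learner >> Concept): pass
          class isPrerequisiteOf(Concept >> Concept): pass
          class recommendedNext(Learner >> Concept): pass

          # If a learner has mastered a concept that is a prerequisite of another
          # concept, recommend that other concept as the next learning-path step.
          rule = Imp()
          rule.set_as_rule(
              "Learner(?l), hasMastered(?l, ?c), isPrerequisiteOf(?c, ?n) -> recommendedNext(?l, ?n)"
          )

      # Running a DL reasoner (e.g. owlready2's sync_reasoner_pellet) would then
      # infer recommendedNext assertions for concrete learner individuals.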

  6. Ontology of fractures

    Science.gov (United States)

    Zhong, Jian; Aydina, Atilla; McGuinness, Deborah L.

    2009-03-01

    Fractures are fundamental structures in the Earth's crust and they can impact many societal and industrial activities including oil and gas exploration and production, aquifer management, CO2 sequestration, waste isolation, the stabilization of engineering structures, and assessing natural hazards (earthquakes, volcanoes, and landslides). Therefore, an ontology which organizes the concepts of fractures could help facilitate a sound education within, and communication among, the highly diverse professional and academic community interested in the problems cited above. We developed a process-based ontology that makes explicit specifications about fractures, their properties, and the deformation mechanisms which lead to their formation and evolution. Our ontology emphasizes the relationships among concepts such as the factors that influence the mechanism(s) responsible for the formation and evolution of specific fracture types. Our ontology is a valuable resource with potential applications in a number of fields that utilize recent advances in Information Technology, specifically for digital data and information in computers, grids, and Web services.

  7. MultiFarm: A Benchmark for Multilingual Ontology Matching

    NARCIS (Netherlands)

    Meilicke, C.; García-Castro, R.; Freitas, F.; van Hage, W.R.; Montiel-Ponsoda, E.; Ribeiro de Azevedo, R.; Stuckenschmidt, H.; Svab-Zamazal, O.; Svatek, V.; Tamalin, A.; Wang, S.

    2012-01-01

    In this paper we present the MultiFarm dataset, which has been designed as a benchmark for multilingual ontology matching. The MultiFarm dataset is composed of a set of ontologies translated into different languages and the corresponding alignments between these ontologies. It is based on the OntoFarm…

  8. The Porifera Ontology (PORO): enhancing sponge systematics with an anatomy ontology.

    Science.gov (United States)

    Thacker, Robert W; Díaz, Maria Cristina; Kerner, Adeline; Vignes-Lebbe, Régine; Segerdell, Erik; Haendel, Melissa A; Mungall, Christopher J

    2014-01-01

    Porifera (sponges) are ancient basal metazoans that lack organs. They provide insight into key evolutionary transitions, such as the emergence of multicellularity and the nervous system. In addition, their ability to synthesize unusual compounds offers potential biotechnical applications. However, much of the knowledge of these organisms has not previously been codified in a machine-readable way using modern web standards. The Porifera Ontology is intended as a standardized coding system for sponge anatomical features currently used in systematics. The ontology is available from http://purl.obolibrary.org/obo/poro.owl, or from the project homepage http://porifera-ontology.googlecode.com/. The version referred to in this manuscript is permanently available from http://purl.obolibrary.org/obo/poro/releases/2014-03-06/. By standardizing character representations, we hope to facilitate more rapid description and identification of sponge taxa, to allow integration with other evolutionary database systems, and to perform character mapping across the major clades of sponges to better understand the evolution of morphological features. Future applications of the ontology will focus on creating (1) ontology-based species descriptions; (2) taxonomic keys that use the nested terms of the ontology to more quickly facilitate species identifications; and (3) methods to map anatomical characters onto molecular phylogenies of sponges. In addition to modern taxa, the ontology is being extended to include features of fossil taxa.
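
    For readers who want to work with the ontology programmatically, the published OWL file can be loaded and inspected with a generic RDF library; the following is a minimal sketch (Python with rdflib), assuming network access and that the PURL given above still resolves to an RDF/XML document:

      # Load the Porifera Ontology from its PURL and list a few labelled classes.
      from rdflib import Graph
      from rdflib.namespace import OWL, RDF, RDFS

      g = Graph()
      g.parse("http://purl.obolibrary.org/obo/poro.owl", format="xml")

      classes = list(g.subjects(RDF.type, OWL.Class))
      print(f"{len(classes)} classes in the Porifera Ontology")

      # Print human-readable labels for the first few anatomical terms.
      for c in classes[:10]:
          for label in g.objects(c, RDFS.label):
              print(c, label)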

  9. A SEMANTIC WEB-BASED E-LEARNING ONTOLOGY SYSTEM

    Directory of Open Access Journals (Sweden)

    Bernard Renaldy Suteja

    2012-05-01

    Full Text Available The claim that e-learning content is a barrier to e-learning is no longer true on today's Internet. The current concerns are how to effectively annotate and organize the available content (both textual and non-textual) to facilitate effective sharing, reusability and customization. In this paper, we explain a component-oriented approach to organizing content in an ontology. We also illustrate our 3-tier e-learning content management architecture and the relevant interfaces. We use a simple yet intuitive example to demonstrate the current working prototype, which is capable of compiling personalized course materials. The e-learning system explained here uses the said ontology.

  10. A Cognitive Support Framework for Ontology Mapping

    Science.gov (United States)

    Falconer, Sean M.; Storey, Margaret-Anne

    Ontology mapping is the key to data interoperability in the semantic web. This problem has received a lot of research attention; however, the emphasis has mostly been devoted to automating the mapping process, even though the creation of mappings often involves the user. As industry interest in semantic web technologies grows and the number of widely adopted semantic web applications increases, we must begin to support the user. In this paper, we combine data gathered from background literature, theories of cognitive support and decision making, and an observational case study to propose a theoretical framework for cognitive support in ontology mapping tools. We also describe a tool called CogZ that is based on this framework.

  11. An Approach to Folksonomy-Based Ontology Maintenance for Learning Environments

    Science.gov (United States)

    Gasevic, D.; Zouaq, Amal; Torniai, Carlo; Jovanovic, J.; Hatala, Marek

    2011-01-01

    Recent research in learning technologies has demonstrated many promising contributions from the use of ontologies and semantic web technologies for the development of advanced learning environments. In spite of those benefits, ontology development and maintenance remain the key research challenges to be solved before ontology-enhanced learning…

  12. Methodology of decreasing software complexity using ontology

    Science.gov (United States)

    DÄ browska-Kubik, Katarzyna

    2015-09-01

    In this paper, a model of a web application's source code, based on the OSD (Ontology for Software Development) ontology, is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code by using many different maintenance techniques, such as the creation of documentation and the elimination of dead code, cloned code, or previously known bugs [1][2]. This approach makes it possible to save on the software maintenance costs of web applications.

  13. Ontology to relational database transformation for web application development and maintenance

    Science.gov (United States)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    Ontology is used as knowledge representation while a database is used as a fact recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment. A case study was conducted to prove the concept.
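
    The general idea of deriving a relational schema from an ontology can be sketched as follows; this toy illustration (Python with rdflib, hypothetical file and names, not the framework described in the paper) maps each owl:Class to a table and each datatype property whose rdfs:domain is that class to a column:

      # Toy sketch: emit SQL DDL from an ontology, one table per owl:Class and one
      # column per datatype property whose domain is that class.
      from rdflib import Graph
      from rdflib.namespace import OWL, RDF, RDFS

      g = Graph()
      g.parse("domain.owl", format="xml")  # hypothetical ontology file

      def local_name(uri):
          # Use the fragment or last path segment as an identifier.
          s = str(uri)
          return s.split("#")[-1].split("/")[-1]

      for cls in g.subjects(RDF.type, OWL.Class):
          columns = ["id INTEGER PRIMARY KEY"]
          for prop in g.subjects(RDF.type, OWL.DatatypeProperty):
              if (prop, RDFS.domain, cls) in g:
                  columns.append(f"{local_name(prop)} TEXT")
          print(f"CREATE TABLE {local_name(cls)} ({', '.join(columns)});")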

  14. Ontologies and Formation Spaces for Conceptual ReDesign of Systems

    Directory of Open Access Journals (Sweden)

    J. Bíla

    2005-01-01

    Full Text Available This paper discusses ontologies, methods for developing them and languages for representing them. A special ontology for the computational support of the Conceptual ReDesign Process (CRDP) is introduced with a simple illustrative example of an application. The ontology, denoted as Global context (GLB), combines features of general semantic networks and features of the UML language. The ontology is task-oriented and domain-oriented, and contains three basic strata: GLBExpl (stratum of Explanation), GLBFAct (stratum of Fields of Activities) and GLBEnv (stratum of Environment), with their sub-strata. The ontology has been developed to represent functions of systems and their components in CRDP. The main difference between this ontology and ontologies which have been developed to identify functions (where the semantic details must be as deep as possible) is in the style of the description of the functions. In the proposed ontology, Formation Spaces are used as lower semantic categories whose semantic depth is variable and depends on the actual solution approach of a specialised Conceptual Designer.

  15. Developing a Philippine Cancer Grid. Part 1: Building a Prototype for a Data Retrieval System for Breast Cancer Research Using Medical Ontologies

    Science.gov (United States)

    Coronel, Andrei D.; Saldana, Rafael P.

    Cancer is a leading cause of morbidity and mortality in the Philippines. Developed within the context of a Philippine Cancer Grid, the present study used web development technologies such as PHP, MySQL, and Apache server to build a prototype data retrieval system for breast cancer research that incorporates medical ontologies from the Unified Medical Language System (UMLS).

  16. Ontology mapping specification in description logics for cooperative ...

    African Journals Online (AJOL)

    Furthermore, the resolution of differences among ontologies is necessary to process queries or use web services in distributed heterogeneous environments. Mapping discovery is a key issue to allow efficient resolution of heterogeneity. We develop an architecture for mapping different systems associated with ontologies.

  17. The Relationship between User Expertise and Structural Ontology Characteristics

    Science.gov (United States)

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  18. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of, and ascription of semantics to, a sentence in the text corpus. The aim of the ontological analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora.

  19. GFVO: the Genomic Feature and Variation Ontology

    KAUST Repository

    Baran, Joachim; Durgahee, Bibi Sehnaaz Begum; Eilbeck, Karen; Antezana, Erick; Hoehndorf, Robert; Dumontier, Michel

    2015-01-01

    Availability and implementation. The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology's GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation are provided on a separate web page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and is therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use.

  20. Process attributes in bio-ontologies

    Directory of Open Access Journals (Sweden)

    Andrade André Q

    2012-08-01

    Full Text Available Background: Biomedical processes can provide essential information about the (mal)functioning of an organism and are thus frequently represented in biomedical terminologies and ontologies, including the GO Biological Process branch. These processes often need to be described and categorised in terms of their attributes, such as rates or regularities. The adequate representation of such process attributes has been a contentious issue in bio-ontologies recently, and domain ontologies have correspondingly developed ad hoc workarounds that compromise interoperability and logical consistency. Results: We present a design pattern for the representation of process attributes that is compatible with upper ontology frameworks such as BFO and BioTop. Our solution rests on two key tenets: firstly, that many of the sorts of process attributes which are biomedically interesting can be characterised by the ways that repeated parts of such processes constitute, in combination, an overall process; secondly, that entities for which a full logical definition can be assigned do not need to be treated as primitive within a formal ontology framework. We apply this approach to the challenge of modelling and automatically classifying examples of normal and abnormal rates and patterns of heart beating processes, and discuss the expressivity required in the underlying ontology representation language. We provide full definitions for process attributes at increasing levels of domain complexity. Conclusions: We show that a logical definition of process attributes is feasible, though limited by the expressivity of DL languages, so that the creation of primitives is still necessary. This finding may endorse current formal upper-ontology frameworks as a way of ensuring consistency, interoperability and clarity.
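
    As a purely illustrative sketch of the kind of full logical definition the authors argue is feasible (the class and property names here are hypothetical, not taken from the paper), a slowed heart beating process could be written in description-logic notation as:

      \[
        \mathsf{BradycardicBeating} \equiv \mathsf{HeartBeatingProcess} \sqcap
        \exists\,\mathsf{hasProcessAttribute}.\mathsf{DecreasedRate}
      \]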

  1. Ontology - MicrobeDB.jp | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available gzip) consists of some directories (see the following table). Data file File name: ontology....tar.gz File URL: ftp://ftp.biosciencedbc.jp/archive/microbedb/LATEST/ontology.tar.gz File size: 9...he NCBI Taxonomy and INSDC ontology files were obtained from the DDBJ web site. O...ples Data item Description ontology/meo/meo.ttl An ontology for describing organismal habitats (especially focused on microbes). onto...logy/meo/meo_fma_mapping.ttl An ontology mapping files t

  2. Sadra’s ontological hermeneutics and the religious language problem

    Directory of Open Access Journals (Sweden)

    Mohammad Bidhendi

    2016-03-01

    Full Text Available Investigating the significance and the knowledge domain of a holy text is one of the fundamental problems of religious language. For Sadra, religious language is a cognitive language. He considers the relation between word and meaning in Revelation to be a genetic one; he believes that, contrary to imagery in literature, in revelation there is a univocity between appearance and reality, and the truths descended through the existential hierarchies correspond with each other in a vertical relationship. In this domain, Molla Sadra philosophically explains the self-manifestation of the revealed truths in terms of the holy text by using the correspondence between knowledge and language (Kalam) and being in the arc of descent, and in the arc of ascent he provides an understanding process of the holy text whose reconstruction forms his model of ontological hermeneutics. The principal problem of this article is the explanation of the significance of religious language and its cognitive domain in Sadra's hermeneutic model, which results from an investigation of this model and of the factors affecting the process of understanding the holy text. This research has been conducted on the basis of Mafatih al-ghayb and an explanation of its philosophical foundations in the Transcendent Philosophy. One of its most important findings is displaying the role of the triad of individual, text and author, in relation to one another, in Sadra's hermeneutic model, and the use of this model for explaining the cognitive domain of religious language with regard to the text and also from the text to the extra-textual. Contrary to the one-dimensional exoteric conventional or esoteric dogmatic and absolutist models of understanding the Quran, Mulla Sadra provides a comprehensive, multiple, coherent model that is based on a singular reality and is rooted in the hierarchy of existence, the existential layers of the human being and the interior levels of the holy text. Due to its movement around the gradational singular reality, the…

  3. An ontology for factors affecting tuberculosis treatment adherence behavior in sub-Saharan Africa.

    Science.gov (United States)

    Ogundele, Olukunle Ayodeji; Moodley, Deshendran; Pillay, Anban W; Seebregts, Christopher J

    2016-01-01

    Adherence behavior is a complex phenomenon influenced by diverse personal, cultural, and socioeconomic factors that may vary between communities in different regions. Understanding the factors that influence adherence behavior is essential in predicting which individuals and communities are at risk of nonadherence. This is necessary for supporting resource allocation and intervention planning in disease control programs. Currently, there is no known concrete and unambiguous computational representation of factors that influence tuberculosis (TB) treatment adherence behavior that is useful for prediction. This study developed a computer-based conceptual model for capturing and structuring knowledge about the factors that influence TB treatment adherence behavior in sub-Saharan Africa (SSA). An extensive review of existing categorization systems in the literature was used to develop a conceptual model that captured scientific knowledge about TB adherence behavior in SSA. The model was formalized as an ontology using the web ontology language. The ontology was then evaluated for its comprehensiveness and applicability in building predictive models. The outcome of the study is a novel ontology-based approach for curating and structuring scientific knowledge of adherence behavior in patients with TB in SSA. The ontology takes an evidence-based approach by explicitly linking factors to published clinical studies. Factors are structured around five dimensions: factor type, type of effect, regional variation, cross-dependencies between factors, and treatment phase. The ontology is flexible and extendable and provides new insights into the nature of and interrelationship between factors that influence TB adherence.

  4. Flexible and Affordable Foreign Language Learning Environment based on Web 2.0 Technologies

    Directory of Open Access Journals (Sweden)

    Christian Guetl

    2013-05-01

    Full Text Available Web technologies and educational platforms have greatly evolved over the past decade. One of the most significant factors contributing to education on the Internet has been the development of Web 2.0 technologies. These technologies, socially interactive in nature, have much to contribute to the area of Computer Assisted Language Learning. Unfortunately, Web 2.0 technologies have for the most part been used in an ad hoc manner, permitting language learners to acquire knowledge through interaction, but not in a more structured manner, as these technologies were not developed to help learn languages as such. The goal of our work is to research and develop an environment which employs Web 2.0 technology plus online language learning tools to provide a more integrated language learning environment. This paper explores the technologies and provides information about how tools can be better integrated to provide a more productive working environment for language learners. A first working proof of concept based on the approach introduced here is promising in supporting modern language requirements; first findings and room for improvement are discussed.

  5. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance.

    Science.gov (United States)

    Cui, Licong

    Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps with the UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted according to their number of occurrences across different source vocabularies. The voting result together with the inconsistent relations serve as the output of COHeRE for possible ontological change. The highest votes provide initial suggestion on how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process. 73.7% of the concept pairs with suggested relationship were agreed by the human expert. The effectiveness of COHeRE indicates that UMLS provides a promising environment to enhance
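
    The core of steps two and three above (flagging concept pairs whose relation type differs across source vocabularies, then voting by number of occurrences) can be sketched in a few lines of plain Python; the tuples below are made-up examples, and the UMLS extraction and MapReduce plumbing are omitted.

        from collections import Counter

        # Assumed input layout after step 1: (concept1, concept2, relation, source_vocabulary).
        claims = [
            ("C0018787", "C0007226", "is_a",    "VOCAB_A"),
            ("C0018787", "C0007226", "part_of", "VOCAB_B"),
            ("C0018787", "C0007226", "part_of", "VOCAB_C"),
        ]

        by_pair = {}
        for c1, c2, rel, source in claims:
            by_pair.setdefault((c1, c2), []).append(rel)

        for pair, rels in by_pair.items():
            if len(set(rels)) > 1:                      # step 2: inconsistency detected
                votes = Counter(rels)                   # step 3: vote by occurrences
                suggested, _ = votes.most_common(1)[0]
                print(pair, "is inconsistent:", dict(votes), "-> highest vote:", suggested)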

  6. A model-driven approach for representing clinical archetypes for Semantic Web environments.

    Science.gov (United States)

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto

    2009-02-01

    The life-long clinical information of any person supported by electronic means configures his Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. There are currently different standards for representing and exchanging EHR information among different systems. In advanced EHR approaches, clinical information is represented by means of archetypes. Most of these approaches use the Archetype Definition Language (ADL) to specify archetypes. However, ADL has some drawbacks when attempting to perform semantic activities in Semantic Web environments. In this work, Semantic Web technologies are used to specify clinical archetypes for advanced EHR architectures. The advantages of using the Ontology Web Language (OWL) instead of ADL are described and discussed in this work. Moreover, a solution combining Semantic Web and Model-driven Engineering technologies is proposed to transform ADL into OWL for the CEN EN13606 EHR architecture.

  7. G-Bean: an ontology-graph based web tool for biomedical literature retrieval.

    Science.gov (United States)

    Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S

    2014-01-01

    query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.

  8. Labeling for Big Data in radiation oncology: The Radiation Oncology Structures ontology.

    Science.gov (United States)

    Bibault, Jean-Emmanuel; Zapletal, Eric; Rance, Bastien; Giraud, Philippe; Burgun, Anita

    2018-01-01

    Leveraging Electronic Health Records (EHR) and Oncology Information Systems (OIS) has great potential to generate hypotheses for cancer treatment, since they directly provide medical data on a large scale. In order to gather a significant number of patients with a high level of clinical detail, multicenter studies are necessary. A challenge in creating high quality Big Data studies involving several treatment centers is the lack of semantic interoperability between data sources. We present the ontology we developed to address this issue. Radiation Oncology anatomical and target volumes were categorized into anatomical and treatment planning classes. International delineation guidelines specific to radiation oncology were used for lymph node areas and target volumes. Hierarchical classes were created to generate The Radiation Oncology Structures (ROS) Ontology. The ROS was then applied to the data from our institution. Four hundred and seventeen classes were created with a maximum of 14 child classes (average = 5). The ontology was then converted into a Web Ontology Language (.owl) format and made available online on Bioportal and GitHub under an Apache 2.0 License. We extracted all structures delineated in our department since the opening in 2001. 20,758 structures were exported from our "record-and-verify" system, demonstrating significant heterogeneity within a single center. All structures were matched to the ROS ontology before integration into our clinical data warehouse (CDW). In this study, we describe a new ontology, specific to radiation oncology, that reports all anatomical and treatment planning structures that can be delineated. This ontology will be used to integrate dosimetric data in the Assistance Publique-Hôpitaux de Paris CDW that stores data from 6.5 million patients (as of February 2017).
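
    As a rough illustration of the matching step (mapping structure names exported from a record-and-verify system onto classes of the published .owl file), the sketch below loads the ontology with Python's rdflib and performs a naive case-insensitive label lookup; the file name, the example strings and the label-only matching strategy are assumptions, not the authors' pipeline.

        from rdflib import Graph
        from rdflib.namespace import RDFS

        g = Graph()
        g.parse("radiation_oncology_structures.owl", format="xml")   # assumed local copy of the ROS file

        # Index ontology classes by their rdfs:label.
        label_index = {str(label).strip().lower(): cls
                       for cls, label in g.subject_objects(RDFS.label)}

        exported = ["PTV boost", "Heart", "ctv_prostate"]             # example exported structure names
        for name in exported:
            key = name.strip().lower().replace("_", " ")
            match = label_index.get(key)
            print(name, "->", match if match else "no exact label match (manual review)")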

  9. Interpreting XML documents via an RDF schema ontology

    NARCIS (Netherlands)

    Klein, Michel

    2002-01-01

    Many business documents are represented in XML. However XML only describes the structure of data, not its meaning. The meaning of data is required for advanced automated processing, as is envisaged in the "Semantic Web". Ontologies are often used to describe the meaning of data items. Many ontology

  10. Using semantic distances for reasoning with inconsistent ontologies

    NARCIS (Netherlands)

    Huang, Zhisheng; van Harmelen, Frank

    2009-01-01

    Re-using and combining multiple ontologies on the Web is bound to lead to inconsistencies between the combined vocabularies. Even many of the ontologies that are in use today turn out to be inconsistent once some of their implicit knowledge is made explicit. However, robust and efficient methods to

  11. Conceptualisation of rights and meta-rule of law for the web of data

    Directory of Open Access Journals (Sweden)

    Pompeu Casanovas

    2015-09-01

    Full Text Available This article deals with some regulatory and legal problems of the Web of Data. Data and metadata are defined. Digital Rights Management (DRM) and Rights Expression Languages (REL) are introduced. The Open Digital Rights Language (ODRL), Licensed Linked Data Resources (LLDR) and Creative Commons Licenses are discussed. The development of RELs by means of Ontology Design Patterns such as LLDR, or of Open Licenses sustained by Policy Models such as ODRL, situates the discussion on metadata at the regulatory level. With the development of the Web of Data, the Rule of Law needs to evolve into a Meta-Rule of Law, incorporating tools to regulate and monitor the semantic layer of the Web. This means reflecting on the construction of a new public space for the exercise of rights.

  12. A collaborative recommendation framework for ontology evaluation and reuse

    OpenAIRE

    Cantador, Iván; Fernández Sánchez, Miriam; Castells, Pablo

    2006-01-01

    This is an electronic version of the paper presented at the International Workshop on Recommender Systems, held in Riva del Garda in 2006. Ontology evaluation can be defined as assessing the quality and the adequacy of an ontology for being used in a specific context, for a specific goal. Although ontology reuse is being extensively addressed by the Semantic Web community, the lack of appropriate support tools and automatic techniques for the evaluation of certain ontology features are oft...

  13. Combining machine learning and ontological data handling for multi-source classification of nature conservation areas

    Science.gov (United States)

    Moran, Niklas; Nieland, Simon; Tintrup gen. Suntrup, Gregor; Kleinschmit, Birgit

    2017-02-01

    Manual field surveys for nature conservation management are expensive and time-consuming and could be supplemented and streamlined by using Remote Sensing (RS). RS is critical to meet requirements of existing laws such as the EU Habitats Directive (HabDir) and more importantly to meet future challenges. The full potential of RS has yet to be harnessed as different nomenclatures and procedures hinder interoperability, comparison and provenance. Therefore, automated tools are needed to use RS data to produce comparable, empirical data outputs that lend themselves to data discovery and provenance. These issues are addressed by a novel, semi-automatic ontology-based classification method that uses machine learning algorithms and Web Ontology Language (OWL) ontologies that yields traceable, interoperable and observation-based classification outputs. The method was tested on European Union Nature Information System (EUNIS) grasslands in Rheinland-Palatinate, Germany. The developed methodology is a first step in developing observation-based ontologies in the field of nature conservation. The tests show promising results for the determination of the grassland indicators wetness and alkalinity with an overall accuracy of 85% for alkalinity and 76% for wetness.

  14. Semantic Web Approach to Ease Regulation Compliance Checking in Construction Industry

    Directory of Open Access Journals (Sweden)

    Bruno Fies

    2012-09-01

    Full Text Available Regulations in the Building Industry are becoming increasingly complex and involve more than one technical area, covering products, components and project implementations. They also play an important role in ensuring the quality of a building and in minimizing its environmental impact. Control or conformance checking is becoming more complex every day, not only for industry, but also for organizations charged with assessing the conformity of new products or processes. This paper details the approach taken by the CSTB (Centre Scientifique et Technique du Bâtiment) in order to simplify this conformance control task. The approach and the proposed solutions are based on semantic web technologies. For this purpose, we first establish a domain ontology, which defines the main concepts involved and their relationships, based on OWL (Web Ontology Language) [1]. We rely on SBVR (Semantics of Business Vocabulary and Business Rules) [2] and SPARQL (SPARQL Protocol and RDF Query Language) [3] to reformulate the regulatory requirements written in natural language in, respectively, a controlled and a formal language. We then structure our control process based on expert practices. Each elementary control step is defined as a SPARQL query, and these are assembled into complex control processes "on demand", according to the component tested and its semantic definition. Finally, we represent in RDF (Resource Description Framework) [4] the association between the SBVR rules and the SPARQL queries representing the same regulatory constraints.
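
    The idea of expressing an elementary control step as a SPARQL query over an RDF description of a building component can be sketched with rdflib as follows; the small vocabulary (ex:GuardRail, ex:heightInMetres) and the 1.0 m threshold are invented for the example and are not taken from the CSTB rule base.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/building#")

        # RDF description of the component under test (normally derived from project data).
        g = Graph()
        g.add((EX.rail_01, RDF.type, EX.GuardRail))
        g.add((EX.rail_01, EX.heightInMetres, Literal(0.9, datatype=XSD.decimal)))

        # Elementary control step: "every guard rail must be at least 1.0 m high",
        # reformulated as a SPARQL ASK query that detects violations.
        check = """
        PREFIX ex: <http://example.org/building#>
        ASK {
            ?rail a ex:GuardRail ;
                  ex:heightInMetres ?h .
            FILTER (?h < 1.0)
        }
        """

        result = g.query(check)
        print("Rule violated" if result.askAnswer else "Rule satisfied")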

  15. Publication and Retrieval of Computational Chemical-Physical Data Via the Semantic Web. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, Neil [Chemical Semantics, Inc., Gainesville, FL (United States)

    2017-07-20

    This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in quantum chemistry calculations, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used, as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only a start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts we have defined a new potential file standard (Common Standard for eXchange, CSX) for computational chemistry data. This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that become part of the graph database that the semantic web employs. We propose the CSX file as a convenient way to encapsulate computational chemistry data.
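
    A toy version of the workflow sketched above (turn a computed result into RDF triples, then retrieve it with SPARQL) is shown below with rdflib; the vocabulary IRI is a placeholder and does not correspond to the actual Gainesville Core (purl.org/gc) terms, and the numeric value is an arbitrary example.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        GC = Namespace("http://example.org/compchem#")    # placeholder, not the real Gainesville Core

        g = Graph()
        calc = GC.calc_0001
        g.add((calc, RDF.type, GC.QuantumChemistryCalculation))
        g.add((calc, GC.molecularFormula, Literal("H2O")))
        g.add((calc, GC.methodName, Literal("B3LYP/6-31G*")))
        g.add((calc, GC.totalEnergy, Literal(-76.41, datatype=XSD.double)))   # arbitrary example value

        print(g.serialize(format="turtle"))               # what a portal would load into its triple store

        q = """
        PREFIX gc: <http://example.org/compchem#>
        SELECT ?calc ?energy WHERE {
            ?calc a gc:QuantumChemistryCalculation ;
                  gc:molecularFormula "H2O" ;
                  gc:totalEnergy ?energy .
        }
        """
        for row in g.query(q):
            print(row.calc, row.energy)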

  16. Semantic Web-based digital, field and virtual geological

    Science.gov (United States)

    Babaie, H. A.

    2012-12-01

    Digital, field and virtual Semantic Web-based education (SWBE) of geological mapping requires the construction of a set of searchable, reusable, and interoperable digital learning objects (LO) for learners, teachers, and authors. These self-contained units of learning may be text, image, or audio, describing, for example, how to calculate the true dip of a layer from two structural contours or find the apparent dip along a line of section. A collection of multi-media LOs can be integrated, through domain and task ontologies, with mapping-related learning activities and Web services, for example, to search for the description of lithostratigraphic units in an area, or plotting orientation data on stereonet. Domain ontologies (e.g., GeologicStructure, Lithostratigraphy, Rock) represent knowledge in formal languages (RDF, OWL) by explicitly specifying concepts, relations, and theories involved in geological mapping. These ontologies are used by task ontologies that formalize the semantics of computational tasks (e.g., measuring the true thickness of a formation) and activities (e.g., construction of cross section) for all actors to solve specific problems (making map, instruction, learning support, authoring). A SWBE system for geological mapping should also involve ontologies to formalize teaching strategy (pedagogical styles), learner model (e.g., for student performance, personalization of learning), interface (entry points for activities of all actors), communication (exchange of messages among different components and actors), and educational Web services (for interoperability). In this ontology-based environment, actors interact with the LOs through educational servers, that manage (reuse, edit, delete, store) ontologies, and through tools which communicate with Web services to collect resources and links to other tools. Digital geological mapping involves a location-based, spatial organization of geological elements in a set of GIS thematic layers. Each layer

  17. Enhancing English Language Planning Strategy Using a WebQuest Model

    Science.gov (United States)

    Al-Sayed, Rania Kamal Muhammad; Abdel-Haq, Eman Muhammad; El-Deeb, Mervat Abou-Bakr; Ali, Mahsoub Abdel-Sadeq

    2016-01-01

    The present study aimed at developing the English language planning strategy of second-year distinguished governmental language preparatory school pupils using a WebQuest model. Fifty participants from the second year at Hassan Abu-Bakr Distinguished Governmental Language School at Al-Qanater Al-Khairia (Qalubia Governorate) were randomly assigned…

  18. A histological ontology of the human cardiovascular system.

    Science.gov (United States)

    Mazo, Claudia; Salazar, Liliana; Corcho, Oscar; Trujillo, Maria; Alegre, Enrique

    2017-10-02

    In this paper, we describe a histological ontology of the human cardiovascular system developed in collaboration between histology experts and computer scientists. The histological ontology is developed following an existing methodology using Conceptual Models (CMs) and is validated using OOPS!, expert evaluation with CMs, and an assessment of how accurately the ontology can answer the Competency Questions (CQs). It is publicly available at http://bioportal.bioontology.org/ontologies/HO and https://w3id.org/def/System. The histological ontology is developed to support complex tasks, such as supporting teaching activities, medical practice and bio-medical research, or enabling natural language interactions.

  19. Towards Self-managed Pervasive Middleware using OWL/SWRL ontologies

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2008-01-01

    Self-management for pervasive middleware is important to realize the Ambient Intelligence vision. In this paper, we present an OWL/SWRL context-ontology-based self-management approach for pervasive middleware, where an OWL ontology is used as the means for context modeling. The context ontologies ... the OWL/SWRL context-ontology-based self-management approach with self-diagnosis in the Hydra middleware, using a device state machine and other dynamic context information, for example web service calls. The evaluations in terms of extensibility, performance and scalability show that this approach is effective...

  20. Model Driven Engineering with Ontology Technologies

    Science.gov (United States)

    Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva

    Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet, most often these software models have limited formal semantics, or the underlying (often graphical) software language varies from case to case in a way that makes it hard if not impossible to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over advantages from ontology technologies to the software modeling domain. It will turn out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate varying needs of software modelers.

  1. PERSONALIZATION SISTEM E-LEARNING BERBASIS ONTOLOGY

    Directory of Open Access Journals (Sweden)

    Ahmad Ashari

    2010-11-01

    Full Text Available Personalization of an Ontology-Based E-learning System. Today, a form of technology known as Web 2.0, which thoroughly supports web-to-web interactions, is present. Interactions such as information sharing in the form of document sharing (SlideShare), picture sharing (Flickr), video sharing (YouTube), wikis, and online networking (weblogs and web forums) principally accommodate community empowerment services. These factors cause the appearance of social interaction through the Internet, as well as learning interaction and anywhere-anytime training, which is recently called e-Learning. Basically, e-Learning needs a self-directed learning method and learning habits that emphasize the learner in the most important role. However, the e-learning systems that are expected to boost the intensity of self-directed learning are unable to represent this importance. This is shown by the current e-Learning systems in Indonesia, which only accommodate the delivery of learning materials identical for all active learners, ignore cognitive aspects, do not offer any approach to or experience of interactive self-learning, and disregard users' ability to adapt. The proposed e-learning system, which is Web 2.0-based, utilizes ontology as the representation of the meaning of knowledge formed by the learner.

  2. A UML profile for the OBO relation ontology

    Science.gov (United States)

    2012-01-01

    Background Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. Particularly, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). Conclusions The use of an established and well-known graphical language in the development of biomedical ontologies provides a more

  3. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster

  4. Ontology-Driven Translator Generator for Data Display Configurations

    National Research Council Canada - National Science Library

    Jones, Charles

    2004-01-01

    .... In addition, the method includes the specification of mappings between a language-specific ontology and its corresponding syntax specification, that is, either an eXtensible Markup Language (XML...

  5. Gene Ontology Consortium: going forward.

    Science.gov (United States)

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Building a developmental toxicity ontology.

    Science.gov (United States)

    Baker, Nancy; Boobis, Alan; Burgoon, Lyle; Carney, Edward; Currie, Richard; Fritsche, Ellen; Knudsen, Thomas; Laffont, Madeleine; Piersma, Aldert H; Poole, Alan; Schneider, Steffen; Daston, George

    2018-04-03

    As more information is generated about modes of action for developmental toxicity and more data are generated using high-throughput and high-content technologies, it is becoming necessary to organize that information. This report discusses the need for a systematic representation of knowledge about developmental toxicity (i.e., an ontology) and proposes a method to build one based on knowledge of developmental biology and mode of action/adverse outcome pathways in developmental toxicity. This report is the result of a consensus working group developing a plan to create an ontology for developmental toxicity that spans multiple levels of biological organization. It provides a description of some of the challenges in building a developmental toxicity ontology and outlines a proposed methodology to meet those challenges. As the ontology is built on currently available web-based resources, a review of these resources is provided. Case studies on one of the most well-understood morphogens and developmental toxicants, retinoic acid, are presented as examples of how such an ontology might be developed. This report outlines an approach to construct a developmental toxicity ontology. Such an ontology will facilitate computer-based prediction of substances likely to induce human developmental toxicity. © 2018 Wiley Periodicals, Inc.

  7. Methodology to build medical ontology from textual resources.

    Science.gov (United States)

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2006-01-01

    In the medical field, it is now established that maintaining unambiguous thesauri requires ontologies. Our research task is to help pneumologists code acts and diagnoses with software that represents medical knowledge through a domain ontology. In this paper, we describe our general methodology, aimed at knowledge engineers, for building various types of medical ontologies based on terminology extraction from texts. The hypothesis is that applying natural language processing tools to textual patient discharge summaries can develop the resources needed to build an ontology in pneumology. Results indicate that the joint use of distributional analysis and lexico-syntactic patterns performed satisfactorily for building such ontologies.

  8. An ontology for major histocompatibility restriction.

    Science.gov (United States)

    Vita, Randi; Overton, James A; Seymour, Emily; Sidney, John; Kaufman, Jim; Tallmadge, Rebecca L; Ellis, Shirley; Hammond, John; Butcher, Geoff W; Sette, Alessandro; Peters, Bjoern

    2016-01-01

    MHC molecules are a highly diverse family of proteins that play a key role in cellular immune recognition. Over time, different techniques and terminologies have been developed to identify the specific type(s) of MHC molecule involved in a specific immune recognition context. No consistent nomenclature exists across different vertebrate species. To correctly represent MHC-related data in The Immune Epitope Database (IEDB), we built upon a previously established MHC ontology and created an ontology to represent MHC molecules as they relate to immunological experiments. This ontology models MHC protein chains from 16 species, deals with different approaches used to identify MHC, such as direct sequencing versus serotyping, relates engineered MHC molecules to naturally occurring ones, connects genetic loci, alleles, protein chains and multi-chain proteins, and establishes evidence codes for MHC restriction. Where available, this work is based on existing ontologies from the OBO Foundry. Overall, representing MHC molecules provides a challenging and practically important test case for ontology building, and could serve as an example of how to integrate other ontology building efforts into web resources.

  9. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    Science.gov (United States)

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms. Thus it is desired to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following
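
    The template-expansion idea behind this kind of tool (read rows of a spreadsheet-like input and stamp out one class per row according to a fixed design pattern) can be illustrated with a short rdflib sketch; the namespace, ID scheme and annotation properties below are assumptions for the example and are much simpler than Ontorat's own setting files.

        import csv, io
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        EX = Namespace("http://example.org/onto/EX_")

        # Assumed input data file: one new ontology term per row.
        rows = csv.DictReader(io.StringIO(
            "label,parent_id,definition\n"
            "example cell line A,0000001,An example cell line term.\n"
            "example cell line B,0000001,Another example cell line term.\n"))

        g = Graph()
        next_id = 1000                        # assumed ID generation setting
        for row in rows:
            term = EX[f"{next_id:07d}"]       # e.g. http://example.org/onto/EX_0001000
            next_id += 1
            g.add((term, RDF.type, OWL.Class))
            g.add((term, RDFS.subClassOf, EX[row["parent_id"]]))       # logical axiom
            g.add((term, RDFS.label, Literal(row["label"])))           # annotation axiom
            g.add((term, EX.definition, Literal(row["definition"])))   # annotation axiom

        print(g.serialize(format="turtle"))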

  10. Initial implementation of a comparative data analysis ontology.

    Science.gov (United States)

    Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin

    2009-07-03

    Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.
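
    To make the listed concepts concrete, here is a small rdflib sketch encoding two OTUs, one character with an observed state, and a tree with an inferred transition; the class and property names are loosely modelled on the concepts above but are placeholders, not the published CDAO IRIs.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        CD = Namespace("http://example.org/cdao-sketch#")   # placeholder, not the real CDAO namespace

        g = Graph()

        # Operational Taxonomic Units: the entities being compared.
        for otu in ("otu_human", "otu_mouse"):
            g.add((CD[otu], RDF.type, CD.OTU))

        # Character-state data: one character and a state observed for one OTU.
        g.add((CD.char_limb_count, RDF.type, CD.Character))
        g.add((CD.obs1, RDF.type, CD.StateObservation))
        g.add((CD.obs1, CD.ofCharacter, CD.char_limb_count))
        g.add((CD.obs1, CD.forOTU, CD.otu_human))
        g.add((CD.obs1, CD.hasState, Literal("4")))

        # Phylogenetic tree and an inferred transition on it.
        g.add((CD.tree1, RDF.type, CD.PhylogeneticTree))
        g.add((CD.trans1, RDF.type, CD.Transition))
        g.add((CD.trans1, CD.onTree, CD.tree1))
        g.add((CD.trans1, RDFS.comment, Literal("inferred change of state of char_limb_count")))

        print(g.serialize(format="turtle"))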

  11. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    Science.gov (United States)

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based CPG and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies especially web ontology language (OWL) ontologies have been profusely used to represent computerized CPG. Using semantic web reasoning capabilities to execute OWL-based computerized CPG unties them from a specific custom-built CPG execution engine and increases their shareability as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from lack of ability to execute CPG with high levels of expressivity, high cognitive load of computerization of paper-based CPG and updating their computerized versions. In order to address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs, however for executing highly complex CPG the OWL 2 DL and OWL 2 DL + SWRL offer additional executional capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPG. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and validity of the generated recommendation in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.
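
    A minimal, assumption-laden sketch of how an OWL class plus a SWRL rule can yield a recommendation once a reasoner runs is given below with owlready2; the clinical content, the class names and the threshold are invented and do not reflect the evaluated execution engines or any real guideline.

        from owlready2 import (get_ontology, Thing, DataProperty, FunctionalProperty,
                               Imp, sync_reasoner_pellet)

        onto = get_ontology("http://example.org/cpg.owl")

        with onto:
            class Patient(Thing): pass
            class RecommendStatin(Thing): pass        # a "recommendation" modelled as a class

            class hasLDL(DataProperty, FunctionalProperty):
                domain = [Patient]
                range  = [float]

            # SWRL rule: invented example, not a real guideline recommendation.
            rule = Imp()
            rule.set_as_rule("Patient(?p), hasLDL(?p, ?l), greaterThan(?l, 190.0) -> RecommendStatin(?p)")

            p1 = Patient("patient_001")
            p1.hasLDL = 210.0

            # Pellet supports SWRL built-ins such as greaterThan; requires Java.
            sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)

        print(onto.RecommendStatin.instances())       # expected to contain patient_001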

  12. Ontology Based Quality Evaluation for Spatial Data

    Science.gov (United States)

    Yılmaz, C.; Cömert, Ç.

    2015-08-01

    Many institutions will be providing data to the National Spatial Data Infrastructure (NSDI). The current technical background of the NSDI is based on syntactic web services; it is expected that this will be replaced by semantic web services. The quality of the data provided is important in terms of the decision-making process and the accuracy of transactions. Therefore, the data quality needs to be tested. This topic has been neglected in Turkey. Data quality control for the NSDI may be done by private or public "data accreditation" institutions. A methodology is required for data quality evaluation. There are studies on data quality, including ISO standards, academic studies and software for evaluating spatial data quality. The ISO 19157 standard defines the data quality elements. Proprietary software such as 1Spatial's 1Validate and ESRI's Data Reviewer offer quality evaluation based on their own classification of rules. Commonly, rule-based approaches are used for geospatial data quality checks. In this study, we look for the technical components needed to devise and implement a rule-based approach with ontologies, using free and open source software in a semantic web context. The semantic web uses ontologies to deliver well-defined web resources and make them accessible to end-users and processes. We have created an ontology conforming to the geospatial data and defined some sample rules to show how to test data with respect to data quality elements, including attribute, topo-semantic and geometrical consistency, using free and open source software. To test the data against the rules, sample GeoSPARQL queries are created and associated with the specifications.
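
    One way to realise such a rule with free and open source software is to phrase an attribute-consistency check as a SPARQL query over an RDF encoding of the features, as in the rdflib sketch below; the tiny vocabulary (ex:Building, ex:storeys) is an assumption for the example and not the ontology created in the study.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/geodata#")

        g = Graph()
        g.add((EX.bldg_1, RDF.type, EX.Building))
        g.add((EX.bldg_1, EX.storeys, Literal(-2, datatype=XSD.integer)))   # inconsistent attribute value
        g.add((EX.bldg_2, RDF.type, EX.Building))
        g.add((EX.bldg_2, EX.storeys, Literal(5, datatype=XSD.integer)))

        # Attribute-consistency rule: the number of storeys must be a positive integer.
        rule = """
        PREFIX ex: <http://example.org/geodata#>
        SELECT ?feature ?value WHERE {
            ?feature a ex:Building ;
                     ex:storeys ?value .
            FILTER (?value < 1)
        }
        """

        for row in g.query(rule):
            print("Quality violation:", row.feature, "storeys =", row.value)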

  13. Benchmarking ontologies: bigger or better?

    Directory of Open Access Journals (Sweden)

    Lixia Yao

    2011-01-01

    Full Text Available A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
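
    A very reduced sketch of a breadth-style measure (what fraction of the word occurrences in a corpus is covered by an ontology's labels) is given below in plain Python; the tokenisation and the coverage formula are simplifications assumed for illustration and are not the metrics defined in the paper.

        import re
        from collections import Counter

        def coverage(ontology_labels, corpus_texts):
            """Fraction of corpus word occurrences matching an ontology label (a crude breadth proxy)."""
            labels = {label.lower() for label in ontology_labels}
            tokens = Counter()
            for text in corpus_texts:
                tokens.update(re.findall(r"[a-z]+", text.lower()))
            covered = sum(n for word, n in tokens.items() if word in labels)
            total = sum(tokens.values())
            return covered / total if total else 0.0

        # Toy data: a few single-word labels and two short "documents".
        labels = ["fever", "cough", "pneumonia", "aspirin"]
        docs = ["Patient presents with fever and productive cough.",
                "News item about elections and the weather."]
        print(round(coverage(labels, docs), 3))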

  14. An ontology for Autism Spectrum Disorder (ASD) to infer ASD phenotypes from Autism Diagnostic Interview-Revised data.

    Science.gov (United States)

    Mugzach, Omri; Peleg, Mor; Bagley, Steven C; Guter, Stephen J; Cook, Edwin H; Altman, Russ B

    2015-08-01

    Our goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data. Knowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data. We extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94. The ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy. The ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology. Copyright
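
    The combinatorial enumeration performed by the custom Protégé plugin mentioned above (many-to-many relations of ADI-R items to diagnostic categories) can be caricatured with itertools; the item identifiers, the category name and the "any two of three items" pattern are invented for the example and are not the actual DSM mapping.

        from itertools import combinations

        # Invented example: a criterion satisfied by any 2 of 3 ADI-R items.
        adi_r_items = ["ADI_R_item_A", "ADI_R_item_B", "ADI_R_item_C"]
        category = "ExamplePhenotypeCategory"
        threshold = 2

        # One class expression (Manchester-like syntax) per qualifying combination;
        # the union of these expressions would define the category.
        expressions = []
        for combo in combinations(adi_r_items, threshold):
            expressions.append(" and ".join(f"(hasAbnormalScoreOn some {item})" for item in combo))

        axiom = f"{category} EquivalentTo:\n    " + "\n    or ".join(expressions)
        print(axiom)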

  15. The ontology-based answers (OBA) service: a connector for embedded usage of ontologies in applications.

    Science.gov (United States)

    Dönitz, Jürgen; Wingender, Edgar

    2012-01-01

    The semantic web depends on the use of ontologies to let electronic systems interpret contextual information. Optimally, the handling and access of ontologies should be completely transparent to the user. As a means to this end, we have developed a service that attempts to bridge the gap between experts in a certain knowledge domain, ontologists, and application developers. The ontology-based answers (OBA) service introduced here can be embedded into custom applications to grant access to the classes of ontologies and their relations as most important structural features as well as to information encoded in the relations between ontology classes. Thus computational biologists can benefit from ontologies without detailed knowledge about the respective ontology. The content of ontologies is mapped to a graph of connected objects which is compatible to the object-oriented programming style in Java. Semantic functions implement knowledge about the complex semantics of an ontology beyond the class hierarchy and "partOf" relations. By using these OBA functions an application can, for example, provide a semantic search function, or (in the examples outlined) map an anatomical structure to the organs it belongs to. The semantic functions relieve the application developer from the necessity of acquiring in-depth knowledge about the semantics and curation guidelines of the used ontologies by implementing the required knowledge. The architecture of the OBA service encapsulates the logic to process ontologies in order to achieve a separation from the application logic. A public server with the current plugins is available and can be used with the provided connector in a custom application in scenarios analogous to the presented use cases. The server and the client are freely available if a project requires the use of custom plugins or non-public ontologies. The OBA service and further documentation is available at http://www.bioinf.med.uni-goettingen.de/projects/oba.

  16. Learning Ontology from Object-Relational Database

    Directory of Open Access Journals (Sweden)

    Kaulins Andrejs

    2015-12-01

    Full Text Available This article describes a method of transformation of object-relational model into ontology. The offered method uses learning rules for such complex data types as object tables and collections – arrays of a variable size, as well as nested tables. Object types and their transformation into ontologies are insufficiently considered in scientific literature. This fact served as motivation for the authors to investigate this issue and to write the article on this matter. In the beginning, we acquaint the reader with complex data types and object-oriented databases. Then we describe an algorithm of transformation of complex data types into ontologies. At the end of the article, some examples of ontologies described in the OWL language are given.
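
    A highly simplified sketch of the transformation direction (object-relational metadata in, OWL classes and properties out) is shown below with rdflib; the toy schema description and the mapping choices are assumptions and do not reproduce the article's rules for collections, varying arrays or nested tables.

        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS, XSD

        EX = Namespace("http://example.org/ordb#")

        # Assumed metadata for one object type: {type name: [(attribute, XSD datatype), ...]}.
        schema = {"PersonType": [("name", XSD.string), ("birthDate", XSD.date)]}

        g = Graph()
        for object_type, attributes in schema.items():
            cls = EX[object_type]
            g.add((cls, RDF.type, OWL.Class))                  # object type -> OWL class
            for attr, xsd_type in attributes:
                prop = EX[f"{object_type}_{attr}"]
                g.add((prop, RDF.type, OWL.DatatypeProperty))  # scalar attribute -> datatype property
                g.add((prop, RDFS.domain, cls))
                g.add((prop, RDFS.range, xsd_type))

        print(g.serialize(format="turtle"))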

  17. BAIK– PROGRAMMING LANGUAGE BASED ON INDONESIAN LEXICAL PARSING FOR MULTITIER WEB DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Haris Hasanudin

    2012-05-01

    Full Text Available Business software development with global teams is increasing rapidly, and the programming language as a development tool plays an important role in global web development. A truly user-friendly programming language should be written in the local language of programmers whose native language is not English. This paper presents our design of the BAIK (Bahasa Anak Indonesia untuk Komputer) scripting language, whose syntax is modeled on Bahasa Indonesia, for multitier web development. We propose the implementation of an Indonesian parsing engine and a binary search tree structure for the memory allocation of variables, and compose language features that support basic Object Oriented Programming, the Common Gateway Interface, HTML style manipulation and database connections. Our goal is to build a real programming language with a simple structural design for web development using Indonesian lexical words.

  18. Ontological Annotation with WordNet

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob; Hohimer, Ryan E.; White, Amanda M.

    2006-06-06

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.
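
    One readily available source of coarse concept classes in WordNet is its lexicographer files ("supersenses"), exposed by NLTK; the sketch below tags each noun with the supersense of its first sense, which is only a naive stand-in for the class recognition (word sense disambiguation) algorithm described above.

        # Requires: pip install nltk, then nltk.download("wordnet") once.
        from nltk.corpus import wordnet as wn

        def coarse_class(word):
            """Return the lexicographer file (supersense) of the word's first noun sense, if any."""
            synsets = wn.synsets(word, pos=wn.NOUN)
            return synsets[0].lexname() if synsets else None   # e.g. 'noun.artifact'

        for token in ["truck", "river", "negotiation", "ontology"]:
            print(token, "->", coarse_class(token))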

  19. Ontology Matching with Semantic Verification.

    Science.gov (United States)

    Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R

    2009-09-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.
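
    The lexical component of such an alignment can be caricatured in a few lines: compare the labels of two ontologies with a string similarity measure and keep the best pair above a threshold. The sketch below uses Python's difflib for this; ASMOV's actual measure also combines structural and extensional evidence, thesauri and semantic verification, none of which is shown here.

        from difflib import SequenceMatcher

        labels_a = ["Myocardial infarction", "Blood pressure", "Heart"]
        labels_b = ["Heart attack", "Arterial blood pressure", "Cardiac structure"]

        def lexical_similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        alignment = []
        for label_a in labels_a:
            best = max(labels_b, key=lambda label_b: lexical_similarity(label_a, label_b))
            score = lexical_similarity(label_a, best)
            if score >= 0.5:                      # arbitrary threshold for the example
                alignment.append((label_a, best, round(score, 2)))

        print(alignment)   # candidate correspondences before any semantic verification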

  20. Framework comparativo de lenguajes de representación de ontologías

    OpenAIRE

    Tagni, Gastón; Roger, Sandra

    2004-01-01

    The Semantic Web is the next stage in the evolution of the Web. It is a new concept for many researchers and developers who have for some time been working to create a semantic network of knowledge that is useful to the people who use it. The Semantic Web is supported by a concept called Ontology, taken from Artificial Intelligence. Ontologies allow the knowledge of a specific domain to be defined in such a way that it can be understood ...

  1. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services from the web, and the ontology reasoning is designed to find refined services among the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine as well as service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  2. Turning to Ontology in STS? Turning to STS through ‘Ontology’

    NARCIS (Netherlands)

    van Heur, B.; Leydesdorff, L.; Wyatt, S.

    2012-01-01

    We examine the evidence for the claim of an ‘ontological turn’ in science and technology studies (STS). Despite an increase in references to ‘ontology’ in STS since 1989, we show that there has not so much been an ontological turn as multiple discussions deploying the language of ontology,

  3. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and

  4. WEB-BASED INSTRUCTIONAL ENVIRONMENTS: TOOLS AND TECHNIQUES FOR EFFECTIVE SECOND LANGUAGE ACQUISITION

    Directory of Open Access Journals (Sweden)

    Esperanza Roman

    2002-06-01

    Full Text Available The potential of the Internet and especially the World Wide Web for the teaching and learning of foreign languages has grown spectacularly in the past five years. Nevertheless, designing and implementing sound materials for an online learning environment involves time-consuming processes in which many instructors may be reluctant to participate. For this reason, Web-based course management systems (WCMSs) have begun to flourish in the market, in an effort to assist teachers in creating learning environments in which students have the necessary means to interact effectively with their peers, their instructors, and the course material. This article reviews the nature of WCMSs, their advantages and disadvantages, and their potential for language learning by focusing on key issues that surround the design, implementation, and assessment of Web-based language courses, and by explaining how to integrate WCMSs to increase students' exposure to authentic materials and language-learning related activities, and to motivate them to engage in meaningful communication processes and collaborative activities.

  5. Effects of Locus of Control and Learner-Control on Web-Based Language Learning

    Science.gov (United States)

    Chang, Mei-Mei; Ho, Chiung-Mei

    2009-01-01

    The study explored the effects of students' locus of control and types of control over instruction on their self-efficacy and performance in a web-based language learning environment. A web-based interactive instructional program focusing on the comprehension of news articles for English language learners was developed in two versions: learner-…

  6. Knowledge Representation from Classification Schema to Semantic Web (I

    Directory of Open Access Journals (Sweden)

    Silvia-Adriana Tomescu

    2014-01-01

    Full Text Available In this essay we aim to investigate knowledge as an approach to describing possible worlds through classification schemas, taxonomies, ontologies and the semantic web. We focus on the historical background and the methods of representing culture and civilization. In this regard, we studied the ancient concern with classifying knowledge, from the biblical period, when the Tree Metaphor concentrated the essence of knowledge, to Francis Bacon's classification and then to Paul Otlet, and we analysed the languages used in the scientific fields and then in the information science field, emphasizing the improvements brought by ICT: hypertext and the semantic web. We paid special attention to knowledge construction through mathematical language and exchange standards. The rationale for this approach comes from the logical and philosophical basis of knowledge representation, which underlines the idea that only properly structured scientific domains ensure the progress of society.

  7. Medizinische Ontologien: das Ende des MeSH / Medical ontologies: the end of MeSH

    Directory of Open Access Journals (Sweden)

    Cazan, Constantin

    2006-12-01

    Full Text Available Since the beginning of information technology, the complexity of medical questions and of medical information management has been an important topic challenging computer scientists. In the 1980s artificial intelligence went astray, although some of its core ideas have nevertheless yielded fruitful results. Eventually, convergent developments in a number of scientific disciplines and the exponential growth of computer hardware were able to meet the high requirements of medical information search. In 2000, Tim Berners-Lee's programmatic call for a Semantic Web brought the ontology topic broader attention. The NLM had already started to develop the Unified Medical Language System (UMLS) 20 years earlier, so in medicine (PubMed) an ontology integrated into a semantic net is already in operation. Hence it is high time for medical librarians and documentalists to engage with this topic, even though it is obscured by a smoke screen of IT terminology. Ontologies can be understood as tools for classification, so essential contributions from library and documentation science can be expected. This paper is intended to open an entrance to the topic. It explains the fundamental elements of UMLS and includes an annotated list of literature for further study.

  8. Language Practice with Multimedia Supported Web-Based Grammar Revision Material

    Science.gov (United States)

    Baturay, Meltem Huri; Daloglu, Aysegul; Yildirim, Soner

    2010-01-01

    The aim of this study was to investigate the perceptions of elementary-level English language learners towards web-based, multimedia-annotated grammar learning. WEBGRAM, a system designed to provide supplementary web-based grammar revision material, uses audio-visual aids to enrich the contextual presentation of grammar and allows learners to…

  9. A Bayesian Network Approach to Ontology Mapping

    National Research Council Canada - National Science Library

    Pan, Rong; Ding, Zhongli; Yu, Yang; Peng, Yun

    2005-01-01

    This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty in semantic web...

  10. XML, Ontologies, and Their Clinical Applications.

    Science.gov (United States)

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.
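
    As a minimal illustration of the kind of structured clinical data exchange discussed above, the sketch below parses a small, made-up XML observation with Python's standard library; the element names and the LOINC-style code are illustrative only and are not taken from the chapter.

    ```python
    import xml.etree.ElementTree as ET

    # A made-up laboratory observation; real exchange formats (e.g. HL7 CDA) are far richer.
    xml_doc = """
    <observation>
      <patientId>P-001</patientId>
      <code system="LOINC">718-7</code>
      <value unit="g/dL">13.2</value>
    </observation>
    """

    root = ET.fromstring(xml_doc)
    print("patient:", root.find("patientId").text)
    print("code:   ", root.find("code").attrib["system"], root.find("code").text)
    print("value:  ", root.find("value").text, root.find("value").attrib["unit"])
    ```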

  11. X-Switch: An Efficient , Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Full Text Available Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance often is a problem as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore it was shown that X-Switch can provide functionality matching that of existing Web application servers but with the added benefit of multi-user support. Finally the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  12. Integrating Mathematics, Science, and Language Arts Instruction Using the World Wide Web.

    Science.gov (United States)

    Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron

    1998-01-01

    Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)

  13. Initial Implementation of a comparative Data Analysis Ontology

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    2009-01-01

    Full Text Available Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: “Operational Taxonomic Units” (OTUs), representing the entities to be compared; “character-state data”, representing the observations compared among OTUs; “phylogenetic tree”, representing the historical path of evolution among the entities; and “transitions”, the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.
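
    As a loose sketch of what declaring such concepts in OWL can look like, the Python/rdflib snippet below defines a few classes and one property named after the concepts in the abstract. The namespace IRI and term names are placeholders, not the actual CDAO identifiers, and rdflib is an assumed dependency.

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    CDAO = Namespace("http://example.org/cdao#")  # placeholder, not the real CDAO IRI

    g = Graph()
    g.bind("cdao", CDAO)

    # Classes loosely named after the key concepts in the abstract
    for cls in ("TU", "CharacterStateDatum", "Tree", "Transition"):
        g.add((CDAO[cls], RDF.type, OWL.Class))

    # An object property linking an observation to the OTU it describes
    g.add((CDAO.belongs_to_TU, RDF.type, OWL.ObjectProperty))
    g.add((CDAO.belongs_to_TU, RDFS.domain, CDAO.CharacterStateDatum))
    g.add((CDAO.belongs_to_TU, RDFS.range, CDAO.TU))

    print(g.serialize(format="turtle"))
    ```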

  15. Ontology: ambiguity and accuracy

    Directory of Open Access Journals (Sweden)

    Marcelo Schiessl

    2012-08-01

    Full Text Available Ambiguity is a major obstacle to information retrieval and is the subject of several research efforts in Information Science. Ontologies have been studied as a means of solving problems related to ambiguity. Paradoxically, the term “ontology” is itself ambiguous and is understood according to its use within each community. Philosophy and Computer Science seem to have the most accentuated difference in the sense they give to the term. The former holds undisputed tradition and authority; the latter, despite being quite recent, holds an informal but pragmatic sense. Information Science ranges from philosophical to computational approaches so as to build organized collections based on a balance between users' needs and the available information. The semantic web requires automation of the information cycle and demands studies related to ontologies. Consequently, revisiting relevant approaches to the study of ontologies plays an important role in providing useful ideas to researchers, maintaining philosophical rigor while retaining the convenience provided by computers.

  16. Connecting geoscience systems and data using Linked Open Data in the Web of Data

    Science.gov (United States)

    Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna

    2014-05-01

    Domain-specific and cross-domain vocabularies in the sense of terminological ontologies are the foundation for virtually unified data retrieval and access in the IUGONET, ESPAS and GFZ ISDC data management systems. SPARQL endpoints, realized either by native RDF databases, e.g. Virtuoso, or by virtual SPARQL endpoints, e.g. D2R services, enable a mash-up of domain-specific systems and data based solely on Web standards, in this case for the space weather and geomagnetic domains, as well as cross-domain connections to data and vocabularies, e.g. related to NASA's VxOs, particularly VWO, or NASA's PDS data system within LOD. Abbreviations: LOD - Linked Open Data; RDF - Resource Description Framework; RDFS - RDF Schema; OWL - Ontology Web Language; SPARQL - SPARQL Protocol and RDF Query Language; FOAF - Friend of a Friend ontology; ESPAS - Near Earth Space Data Infrastructure for e-Science (Project); IUGONET - Inter-university Upper Atmosphere Global Observation Network (Project); GFZ ISDC - German Research Centre for Geosciences Information System and Data Center; XML - Extensible Mark-up Language; D2R - (Relational) Database to RDF (Transformation); XSLT - Extensible Stylesheet Language Transformation; Virtuoso - OpenLink Virtuoso Universal Server (including RDF data management); NASA - National Aeronautics and Space Administration; VOx - Virtual Observatories; VWO - Virtual Wave Observatory; PDS - Planetary Data System
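
    The abstract above revolves around SPARQL endpoints as the glue between systems. The snippet below shows what querying such an endpoint from Python looks like using SPARQLWrapper, run here against DBpedia as a public Linked Open Data stand-in, since the project endpoints are not listed in the record.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")  # public LOD endpoint as a stand-in
    endpoint.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?label WHERE {
            <http://dbpedia.org/resource/Space_weather> rdfs:label ?label .
            FILTER (lang(?label) = "en")
        } LIMIT 1
    """)
    endpoint.setReturnFormat(JSON)

    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["label"]["value"])
    ```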

  17. Teaching a Foreign Language to Deaf People via Vodcasting & Web 2.0 Tools

    Science.gov (United States)

    Drigas, Athanasios; Vrettaros, John; Tagoulis, Alexandors; Kouremenos, Dimitris

    This paper presents the design and development of an e-learning course for teaching a foreign language to deaf people whose first language is sign language. The course is based on e-material, vodcasting and Web 2.0 tools such as social networking and blogs. The course has been designed especially for deaf people and explores the possibilities that e-learning material, vodcasting and Web 2.0 tools can offer to enhance the learning process and achieve more effective learning results.

  18. Didactical Ontologies

    Directory of Open Access Journals (Sweden)

    Steffen Mencke, Reiner Dumke

    2008-03-01

    Full Text Available Ontologies are a fundamental concept of the Semantic Web envisioned by Tim Berners-Lee [1]. Together with explicit representation of the semantics of data for machine-accessibility, such domain theories are the basis for intelligent next-generation applications for the web and other areas of interest [2]. Their application to special aspects within the domain of e-learning is often proposed to support the increasing complexity ([3], [4], [5], [6]). They can thus provide better support for course generation or learning scenario description [7]. By modeling didactics-related expertise and providing it to the creators of courses, many improvements such as reuse, rapid development and, of course, increased learning performance become possible due to the separation from other aspects of e-learning platforms, as already proposed in [8].

  19. Buildings classification from airborne LiDAR point clouds through OBIA and ontology driven approach

    Science.gov (United States)

    Tomljenovic, Ivan; Belgiu, Mariana; Lampoltshammer, Thomas J.

    2013-04-01

    the consistency of the developed ontologies, and logical reasoning is performed to infer implicit relations between defined concepts. The ontology for the definition of a building is specified using the Ontology Web Language (OWL). It is the most widely used ontology language and is based on Description Logics (DL). DL allows the description of internal properties of modelled concepts (roof typology, shape, area, height etc.) and relationships between objects (IS_A, MEMBER_OF/INSTANCE_OF). It captures terminological knowledge (TBox) as well as assertional knowledge (ABox), which represents facts about concept instances, i.e. the buildings in airborne LiDAR data. To assess the classification accuracy, ground truth data generated by visual interpretation are used, and classification results are calculated in terms of precision and recall. The advantages of this approach are: (i) flexibility, (ii) transferability, and (iii) extendibility, i.e. the ontology can be extended with further concepts, data properties and object properties.
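
    A minimal sketch of the TBox/ABox split mentioned above, written with Python and rdflib; the class, property and instance names are invented for illustration and are not taken from the study's ontology.

    ```python
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS, OWL, XSD

    EX = Namespace("http://example.org/buildings#")  # illustrative namespace
    g = Graph()
    g.bind("ex", EX)

    # TBox: terminological knowledge (classes, IS_A hierarchy, properties)
    g.add((EX.Building, RDF.type, OWL.Class))
    g.add((EX.ResidentialBuilding, RDF.type, OWL.Class))
    g.add((EX.ResidentialBuilding, RDFS.subClassOf, EX.Building))
    g.add((EX.height, RDF.type, OWL.DatatypeProperty))
    g.add((EX.height, RDFS.domain, EX.Building))
    g.add((EX.height, RDFS.range, XSD.double))

    # ABox: assertional knowledge about one LiDAR-derived object (INSTANCE_OF)
    g.add((EX.building_042, RDF.type, EX.ResidentialBuilding))
    g.add((EX.building_042, EX.height, Literal(8.5, datatype=XSD.double)))

    print(g.serialize(format="turtle"))
    ```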

  20. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    Science.gov (United States)

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  1. Automating Ontological Annotation with WordNet

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of each concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.
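
    A rough illustration of the WordNet machinery the abstract relies on, using NLTK (an assumed dependency with its 'wordnet' corpus downloaded); mapping a sense to a coarse class via a hypernym path is only a toy stand-in for the platform's class recognition algorithm.

    ```python
    from nltk.corpus import wordnet as wn  # requires: pip install nltk; nltk.download('wordnet')

    # Candidate senses (synsets) for an ambiguous word
    for synset in wn.synsets("bank", pos=wn.NOUN)[:3]:
        # Walk one hypernym path upwards to reach a coarser "concept class"
        path = synset.hypernym_paths()[0]
        coarse = path[min(3, len(path) - 1)]
        print(f"{synset.name():18s} -> {coarse.name():22s} | {synset.definition()}")
    ```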

  2. Modular Logic Programming for Web Data, Inheritance and Agents

    Science.gov (United States)

    Karali, Isambo

    The Semantic Web provides a framework and a set of technologies enabling effective machine-processable information. However, most of the problems addressed in the Semantic Web were tackled by the artificial intelligence community in the past. Within that period, Logic Programming emerged as a complete framework ranging from a sound formal theory, based on Horn clauses, to a declarative description language and an operational behavior that can be executed. Logic programming and its extensions have already been used in various approaches in the Semantic Web or the traditional Web context. In this work, we investigate the use of Modular Logic Programming, i.e. Logic Programming extended with modules, to address issues of the Semantic Web ranging from the ontology layer to reasoning and agents. These techniques provide a uniform framework ranging from the data layer to the higher layers of logic, avoiding the problem of incompatibilities between technologies related to different Semantic Web layers. Moreover, it can operate directly on top of existing World Wide Web sources.

  3. The Ontology for Biomedical Investigations.

    Science.gov (United States)

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  4. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    Science.gov (United States)

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However, they suffer from inconsistent renderings, distribution formats, and syntax that make building applications on top of common terminology services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g. synonyms and definitions) and semantic information (e.g. hierarchies), and distinguish the latter for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. Copyright © 2012 Elsevier Inc. All rights reserved.
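
    To make the lexical/semantic split concrete, the sketch below records one invented concept with SKOS and Dublin Core terms using rdflib; the concept, its labels and its parent are illustrative examples, not entries from CTS2 or LexGrid.

    ```python
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, SKOS, DCTERMS

    EX = Namespace("http://example.org/terminology#")  # illustrative namespace
    g = Graph()
    g.bind("skos", SKOS)
    g.bind("dcterms", DCTERMS)

    mi = EX.MyocardialInfarction
    g.add((mi, RDF.type, SKOS.Concept))

    # Lexical information: preferred label, synonym, definition
    g.add((mi, SKOS.prefLabel, Literal("Myocardial infarction", lang="en")))
    g.add((mi, SKOS.altLabel, Literal("Heart attack", lang="en")))
    g.add((mi, SKOS.definition, Literal("Necrosis of heart muscle caused by ischemia.", lang="en")))
    g.add((mi, DCTERMS.source, Literal("illustrative entry, not from a real terminology")))

    # Semantic information: hierarchy
    g.add((EX.HeartDisease, RDF.type, SKOS.Concept))
    g.add((mi, SKOS.broader, EX.HeartDisease))

    print(g.serialize(format="turtle"))
    ```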

  5. WEB-BASED LANGUAGE CLUB AFFECTING EFL LEARNERS’ PROFICIENCY: A CASE OF IRANIAN LEARNERS

    Directory of Open Access Journals (Sweden)

    Hamid Ashraf

    2014-06-01

    Full Text Available Language clubs have been reported to be effective in learning languages and in increasing motivation and independence (Gao, 2009). The present study was an attempt to investigate the effect of a web-based language club on the language proficiency of Iranian EFL learners. A number of pre-intermediate learners from two universities (118 out of 154) were selected through a test of proficiency (TOEFL PBL) and then put into experimental and control groups. The participants in the experimental group went online and acted as members of a virtual language club for a period of 6 months. They got involved in activities like emailing, chatting, and weblogging. Data were collected through the TOEFL PBL. The analyzed data from the test of proficiency indicated that the experimental group outperformed the control group. Consequently, it might be proposed that web-based language clubs can make language learning easier and more efficient.

  6. An ontological knowledge framework for adaptive medical workflow.

    Science.gov (United States)

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained, and as a result, they make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage and even create new context-aware medical workflows and execute them on-the-fly.

  7. Semantic Web based Self-management for a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2008-01-01

    Self-management is one of the challenges for realizing ambient intelligence in pervasive computing. In this paper, we propose and present a semantic Web based self-management approach for a pervasive service middleware, where dynamic context information is encoded in a set of self-management context ontologies. The proposed approach is justified by the characteristics of pervasive computing and by the open world assumption and reasoning potential of the semantic Web and its rule language. To enable real-time self-management, application-level and network-level state reporting is employed in our approach. State changes trigger the execution of self-management rules for adaptation, monitoring, diagnosis, and so on. Evaluations of self-diagnosis in terms of extensibility, performance, and scalability show that the semantic Web based self-management approach is effective in achieving the self-diagnosis goals...

  8. PAV ontology: provenance, authoring and versioning.

    Science.gov (United States)

    Ciccarese, Paolo; Soiland-Reyes, Stian; Belhajjame, Khalid; Gray, Alasdair Jg; Goble, Carole; Clark, Tim

    2013-11-22

    Provenance is a critical ingredient for establishing trust in published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as Dublin Core Terms (DC Terms) and the W3C Provenance Ontology (PROV-O) are domain-independent and general-purpose, and they allow and encourage extensions to cover more specific needs. In particular, to track authoring and versioning information of web resources, PROV-O provides a basic methodology but no specific classes and properties for identifying or distinguishing between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. We present the Provenance, Authoring and Versioning ontology (PAV, namespace http://purl.org/pav/): a lightweight ontology for capturing "just enough" descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations, in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV, illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the W3C PROV-O ontology to support broader interoperability. The initial design of the PAV ontology was driven by requirements from the AlzSWAN project, with further requirements incorporated later from other projects detailed in this paper. The authors strived to keep PAV lightweight and compact by including only those terms that have been demonstrated to be pragmatically useful in existing applications, and by recommending terms from existing ontologies when plausible. We analyze and compare PAV with related
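
    A small usage sketch of the kind of statements PAV supports, written with rdflib. The resources and agents are invented; the property names used here (createdBy, curatedBy, version, previousVersion, derivedFrom, createdOn) are, to the best of our reading, standard PAV terms under the namespace given in the abstract.

    ```python
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import XSD

    PAV = Namespace("http://purl.org/pav/")  # namespace given in the abstract
    EX = Namespace("http://example.org/")    # invented resources for the sketch

    g = Graph()
    g.bind("pav", PAV)

    doc = EX.dataset_v2
    g.add((doc, PAV.createdBy, EX.alice))            # agent who authored the content
    g.add((doc, PAV.curatedBy, EX.bob))              # agent who curated it
    g.add((doc, PAV.version, Literal("2.0")))
    g.add((doc, PAV.previousVersion, EX.dataset_v1))
    g.add((doc, PAV.derivedFrom, EX.raw_measurements))
    g.add((doc, PAV.createdOn, Literal("2013-11-22T00:00:00Z", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))
    ```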

  9. Query Processing in Ontology-Based Peer-to-Peer Systems

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Harmelen, Frank Van; Giunchiglia, Fausto

    2005-01-01

    The unstructured, heterogeneous and dynamic nature of the Web poses a new challenge to query-answering over multiple data sources. The so-called Semantic Web aims at providing more and semantically richer structures in terms of ontologies and meta-data. A problem that remains is the combined use of

  10. Application of Pareto optimization method for ontology matching in nuclear reactor domain

    International Nuclear Information System (INIS)

    Meenachi, N. Madurai; Baba, M. Sai

    2017-01-01

    This article describes the need for ontology matching and the methods to achieve it. Effort has been put into the implementation of a semantic web based knowledge management system for the nuclear domain, which necessitated the use of methods for developing ontology matching. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as Jaro-Winkler distance, the Needleman-Wunsch algorithm, Bigram, Kullback and Cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching diversity in the nuclear reactor domain, and the same is illustrated.
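
    As a toy illustration of two of the string measures named above (bigram and cosine similarity) applied to concept labels, the sketch below uses plain Python; it does not implement the Pareto optimization itself, and the example labels are invented.

    ```python
    from collections import Counter

    def bigrams(s: str) -> set:
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    def dice_bigram_sim(a: str, b: str) -> float:
        # Bigram (Dice) similarity between two concept labels
        ba, bb = bigrams(a), bigrams(b)
        return 2 * len(ba & bb) / (len(ba) + len(bb)) if ba and bb else 0.0

    def cosine_sim(a: str, b: str) -> float:
        # Cosine similarity over bag-of-words token counts
        ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ta[t] * tb[t] for t in ta)
        norm = sum(v * v for v in ta.values()) ** 0.5 * sum(v * v for v in tb.values()) ** 0.5
        return dot / norm if norm else 0.0

    print(dice_bigram_sim("ReactorCore", "CoreOfReactor"))
    print(cosine_sim("control rod drive mechanism", "rod drive control system"))
    ```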

  11. Application of Pareto optimization method for ontology matching in nuclear reactor domain

    Energy Technology Data Exchange (ETDEWEB)

    Meenachi, N. Madurai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Planning and Human Resource Management Div.; Baba, M. Sai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Resources Management Group

    2017-12-15

    This article describes the need for ontology matching and the methods to achieve it. Effort has been put into the implementation of a semantic web based knowledge management system for the nuclear domain, which necessitated the use of methods for developing ontology matching. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as Jaro-Winkler distance, the Needleman-Wunsch algorithm, Bigram, Kullback and Cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching diversity in the nuclear reactor domain, and the same is illustrated.

  12. A Simulation Model Articulation of the REA Ontology

    Science.gov (United States)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  13. Towards the Semantic Web

    NARCIS (Netherlands)

    Davies, John; Fensel, Dieter; Harmelen, Frank Van

    2003-01-01

    With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies

  14. CRAVE: a database, middleware and visualization system for phenotype ontologies.

    Science.gov (United States)

    Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M

    2005-04-01

    A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible JAVA application that accesses an underlying MySQL database of ontologies via a JAVA persistent middleware layer (Chameleon). This maps the database tables into discrete JAVA classes and creates memory resident, interlinked objects corresponding to the ontology data. These JAVA objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.

  15. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: an ontological model to represent a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model; and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or ontological model. Currently, we have...

  16. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    Science.gov (United States)

    Ma, X.; Zheng, J. G.; Goldstein, J.; Duggan, B.; Xu, J.; Du, C.; Akkiraju, A.; Aulenbach, S.; Tilmes, C.; Fox, P. A.

    2013-12-01

    The periodic National Climate Assessment (NCA) of the US Global Change Research Program (USGCRP) [1] produces reports about findings of global climate change and the impacts of climate change on the United States. Those findings are of great public and academic concern and are used in policy and management decisions, which makes the provenance of the findings in those reports especially important. The USGCRP is developing a Global Change Information System (GCIS), in which the NCA reports and associated provenance information are the primary records. We have been modeling and developing Semantic Web applications for the GCIS. By applying a use case-driven iterative methodology [2], we developed an ontology [3] to represent the content structure of a report and the associated provenance information. We also mapped the classes and properties in our ontology into the W3C PROV-O ontology [4] to realize a formal representation of provenance. We successfully implemented the ontology in several pilot systems for a recent National Climate Assessment report (i.e., the NCA3). These systems provide users with the functionality to browse and search provenance information by topics of interest. Provenance information of the NCA3 has been made structured and interoperable by applying the developed ontology. Besides the pilot systems we developed, other tools and services are also able to interact with the data in the context of the 'Web of data' and thus create added value. Our research shows that the use case-driven iterative method bridges the gap between Semantic Web researchers and earth and environmental scientists and can be deployed rapidly for developing Semantic Web applications. Our work also provides first-hand experience in re-using the W3C PROV-O ontology in the field of earth and environmental sciences, as the PROV-O ontology was only recently ratified (on 04/30/2013) by the W3C as a recommendation and relevant applications are still rare. [1] http
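
    The snippet below is a minimal, hypothetical sketch of the kind of PROV-O statement such an ontology maps to, written with rdflib; the GCIS identifiers are invented, and only prov:Entity, prov:Agent, prov:wasDerivedFrom and prov:wasAttributedTo from the W3C PROV-O vocabulary are used.

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C PROV-O namespace
    GCIS = Namespace("http://example.org/gcis/")    # invented identifiers for the sketch

    g = Graph()
    g.bind("prov", PROV)
    g.bind("gcis", GCIS)

    # A report finding is an entity derived from a dataset and attributed to an agency
    g.add((GCIS.nca3_finding_1, RDF.type, PROV.Entity))
    g.add((GCIS.temperature_dataset, RDF.type, PROV.Entity))
    g.add((GCIS.nca3_finding_1, PROV.wasDerivedFrom, GCIS.temperature_dataset))
    g.add((GCIS.usgcrp, RDF.type, PROV.Agent))
    g.add((GCIS.nca3_finding_1, PROV.wasAttributedTo, GCIS.usgcrp))

    print(g.serialize(format="turtle"))
    ```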

  17. Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling

    Science.gov (United States)

    Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.

    The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.

  18. Semantic web for integrated network analysis in biomedicine.

    Science.gov (United States)

    Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y

    2009-03-01

    The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing the Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talks, drug efficacy analysis and herb-drug interactions analysis.

  19. Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.

    Science.gov (United States)

    Burek, Patryk; Loebe, Frank; Herre, Heinrich

    2017-10-04

    Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.

  20. OLS Dialog: An open-source front end to the Ontology Lookup Service

    Directory of Open Access Journals (Sweden)

    Eidhammer Ingvar

    2010-01-01

    Full Text Available Abstract Background With the growing amount of biomedical data available in public databases it has become increasingly important to annotate data in a consistent way in order to allow easy access to this rich source of information. Annotating the data using controlled vocabulary terms and ontologies makes it much easier to compare and analyze data from different sources. However, finding the correct controlled vocabulary terms can sometimes be a difficult task for the end user annotating these data. Results In order to facilitate the location of the correct term in the correct controlled vocabulary or ontology, the Ontology Lookup Service was created. However, using the Ontology Lookup Service as a web service is not always feasible, especially for researchers without bioinformatics support. We have therefore created a Java front end to the Ontology Lookup Service, called the OLS Dialog, which can be plugged into any application requiring the annotation of data using controlled vocabulary terms, making it possible to find and use controlled vocabulary terms without requiring any additional knowledge about web services or ontology formats. Conclusions As a user-friendly open source front end to the Ontology Lookup Service, the OLS Dialog makes it straightforward to include controlled vocabulary support in third-party tools, which ultimately makes the data even more valuable to the biomedical community.

  1. GOssTo: a stand-alone application and a web tool for calculating semantic similarities on the Gene Ontology.

    Science.gov (United States)

    Caniza, Horacio; Romero, Alfonso E; Heron, Samuel; Yang, Haixuan; Devoto, Alessandra; Frasca, Marco; Mesiti, Marco; Valentini, Giorgio; Paccanaro, Alberto

    2014-08-01

    We present GOssTo, the Gene Ontology semantic similarity Tool, a user-friendly software system for calculating semantic similarities between gene products according to the Gene Ontology. GOssTo is bundled with six semantic similarity measures, including both term- and graph-based measures, and has extension capabilities to allow the user to add new similarities. Importantly, for any measure, GOssTo can also calculate the Random Walk Contribution that has been shown to greatly improve the accuracy of similarity measures. GOssTo is very fast, easy to use, and it allows the calculation of similarities on a genomic scale in a few minutes on a regular desktop machine. alberto@cs.rhul.ac.uk GOssTo is available both as a stand-alone application running on GNU/Linux, Windows and MacOS from www.paccanarolab.org/gossto and as a web application from www.paccanarolab.org/gosstoweb. The stand-alone application features a simple and concise command line interface for easy integration into high-throughput data processing pipelines. © The Author 2014. Published by Oxford University Press.
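
    For readers unfamiliar with graph-based Gene Ontology similarity, the toy sketch below computes a Jaccard similarity over ancestor sets on a tiny, made-up DAG; this only illustrates the general idea and is not one of the six measures bundled with GOssTo.

    ```python
    # Toy GO-like DAG: term -> set of direct is_a parents (made-up identifiers)
    is_a = {
        "GO:B": {"GO:A"},
        "GO:C": {"GO:A"},
        "GO:D": {"GO:B", "GO:C"},
        "GO:E": {"GO:C"},
    }

    def ancestors(term: str) -> set:
        # All ancestors reachable via is_a edges, including the term itself
        seen, stack = {term}, [term]
        while stack:
            for parent in is_a.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def jaccard_sim(t1: str, t2: str) -> float:
        a1, a2 = ancestors(t1), ancestors(t2)
        return len(a1 & a2) / len(a1 | a2)

    print(jaccard_sim("GO:D", "GO:E"))  # 0.4: the two terms share ancestors GO:C and GO:A
    ```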

  2. Supporting spatial data harmonization process with the use of ontologies and Semantic Web technologies

    Science.gov (United States)

    Strzelecki, M.; Iwaniak, A.; Łukowicz, J.; Kaczmarek, I.

    2013-10-01

    Nowadays, spatial information is not only used by professionals, but also by ordinary citizens, who use it in their daily activities. The Open Data initiative states that data should be freely and unreservedly available to all users; this also applies to spatial data. As spatial data become widely available, it is essential to publish them in a form which guarantees the possibility of integrating them with other, heterogeneous data sources. Interoperability is the possibility to combine spatial data sets from different sources in a consistent way, as well as to provide access to them. Providing syntactic interoperability based on well-known data formats is relatively simple, unlike providing semantic interoperability, due to the multiple possible interpretations of data. One of the issues connected with the problem of achieving interoperability is data harmonization. It is a process of providing access to spatial data in a representation that allows combining them with other harmonized data in a coherent way by using a common set of data product specifications. Spatial data harmonization is performed by creating definitions of reclassification and transformation rules (mapping schemas) for source application schemas. Creating those rules is a very demanding task which requires wide domain knowledge and a detailed look into the application schemas. The paper focuses on proposing methods for supporting the data harmonization process through automated or supervised creation of mapping schemas with the use of ontologies, ontology matching methods and Semantic Web technologies.

  3. Ontology-based, Tissue MicroArray oriented, image centered tissue bank

    Directory of Open Access Journals (Sweden)

    Viti Federica

    2008-04-01

    Full Text Available Abstract Background The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a web-oriented Tissue MicroArray system that supports researchers in managing bio-samples and that, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides an ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant gene ontology definition, enabling statistical studies about the correlation between the analyzed pathology and the most commonly related biological processes.

  4. Developing an Ontology for Ocean Biogeochemistry Data

    Science.gov (United States)

    Chandler, C. L.; Allison, M. D.; Groman, R. C.; West, P.; Zednik, S.; Maffei, A. R.

    2010-12-01

    Semantic Web technologies offer great promise for enabling new and better scientific research. However, significant challenges must be met before the promise of the Semantic Web can be realized for a discipline as diverse as oceanography. Evolving expectations for open access to research data combined with the complexity of global ecosystem science research themes present a significant challenge, and one that is best met through an informatics approach. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is funded by the National Science Foundation Division of Ocean Sciences to work with ocean biogeochemistry researchers to improve access to data resulting from their respective programs. In an effort to improve data access, BCO-DMO staff members are collaborating with researchers from the Tetherless World Constellation (Rensselaer Polytechnic Institute) to develop an ontology that formally describes the concepts and relationships in the data managed by the BCO-DMO. The project required transforming a legacy system of human-readable, flat metadata files first into well-ordered controlled vocabularies and then into a fully developed ontology. To improve semantic interoperability, terms from the BCO-DMO controlled vocabularies are being mapped to controlled vocabulary terms adopted by other oceanographic data management organizations. While the entire process has proven to be difficult, time-consuming and labor-intensive, the work has been rewarding and is a necessary prerequisite for the eventual incorporation of Semantic Web tools. From the beginning of the project, development of the ontology has been guided by a use case based approach. The use cases were derived from data access related requests received from members of the research community served by the BCO-DMO. The resultant ontology satisfies the requirements of the use cases and reflects the information stored in the metadata database. The BCO-DMO metadata database currently contains information that

  5. Eliom: A core ML language for Tierless Web programming

    OpenAIRE

    Radanne , Gabriel; Vouillon , Jérôme; Balat , Vincent

    2016-01-01

    Eliom is a dialect of OCaml for Web programming in which server and client pieces of code can be mixed in the same file using syntactic annotations. This allows building a whole application as a single distributed program, in which it is possible to define reusable widgets with both server and client behaviors in a composable way. Our language also enables simple and type-safe communication. Eliom matches the specificities of the Web by allowing the programmer to inter...

  6. War of ontology worlds: mathematics, computer code, or Esperanto?

    Science.gov (United States)

    Rzhetsky, Andrey; Evans, James A

    2011-09-01

    The use of structured knowledge representations (ontologies and terminologies) has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like the computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom-designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.

  7. An ontology for factors affecting tuberculosis treatment adherence behavior in sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Ogundele OA

    2016-04-01

    Full Text Available Purpose: Adherence behavior is a complex phenomenon influenced by diverse personal, cultural, and socioeconomic factors that may vary between communities in different regions. Understanding the factors that influence adherence behavior is essential in predicting which individuals and communities are at risk of nonadherence. This is necessary for supporting resource allocation and intervention planning in disease control programs. Currently, there is no known concrete and unambiguous computational representation of the factors that influence tuberculosis (TB) treatment adherence behavior that is useful for prediction. This study developed a computer-based conceptual model for capturing and structuring knowledge about the factors that influence TB treatment adherence behavior in sub-Saharan Africa (SSA). Methods: An extensive review of existing categorization systems in the literature was used to develop a conceptual model that captured scientific knowledge about TB adherence behavior in SSA. The model was formalized as an ontology using the web ontology language. The ontology was then evaluated for its comprehensiveness and applicability in building predictive models. Conclusion: The outcome of the study is a novel ontology-based approach to curating and structuring scientific knowledge of adherence behavior in patients with TB in SSA. The ontology takes an evidence-based approach by explicitly linking factors to published clinical studies. Factors are structured around five dimensions: factor type, type of effect, regional variation, cross-dependencies between factors, and treatment phase. The ontology is

  8. Metadata and Ontologies in Learning Resources Design

    Science.gov (United States)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context, and information about learners' characteristics, among others. An important source of this knowledge is metadata. However, metadata by themselves do not cover all the information needed for resource design. Here we argue for the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search for and selection of resources used in design. Classification is done, based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we give evidence of the implementation of a Learning Object Classifier based on an ontology. We demonstrate that the use of ontologies can support design activities in e-learning.

  9. Product line based ontology reuse in context-aware e-business environment

    DEFF Research Database (Denmark)

    Zhang, Weishan; Kunz, Thomas

    2006-01-01

    Improving the reusability of ontologies is recognized as increasingly important due to the prevalence of OWL research and applications, but no convincing methodology and tool support exists in this direction yet. In this paper, we apply ideas from research and practice with software product lines in order to explore this issue. The ontology is developed and managed according to the commonalities and variabilities underlying a specific problem domain. A meta-ontology is used in order to improve the reusability, evolvability and customizability of the ontology. Another advantage is being able to generate the needed ontology from the created meta-ontology, implemented with XVCL (XML-based Variant Configuration Language) technology. We demonstrate our product-line-based reuse approach with an example B2C application.

  10. ATTITUDES OF STUDENTS TOWARDS LEARNING OBJECTS IN WEB-BASED LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ahmet BASAL

    2012-01-01

    Full Text Available Language education is important in the rapidly changing world. Every year much effort has been spent on preparing teaching materials for language education. Since positive attitudes of learners towards a teaching material enhance the effectiveness of that material, it is important to determine the attitudes of learners towards the material used. Learning objects (LOs) are a new type of material on which many studies have been conducted in recent years. The aim of this study is to determine the attitudes of students towards LOs in web-based language learning. To this end, the study was conducted in the English I Course at the Department of Computer Programming at Kırıkkale University in the 2010-2011 Fall Semester. Seventy LOs appropriate for a six-week-long lecture program were integrated into the Learning Management System (LMS) of Kırıkkale University. The study group consisted of 38 students. After the six-week-long implementation period of the study, an attitude scale was administered to the students. The findings indicated that students in web-based language education have positive attitudes towards LOs.

  11. BOWiki: an ontology-based wiki for annotation of data and integration of knowledge in biology

    Directory of Open Access Journals (Sweden)

    Gregorio Sergio E

    2009-05-01

    Full Text Available Abstract Motivation Ontology development and the annotation of biological data using ontologies are time-consuming exercises that currently require input from expert curators. Open, collaborative platforms for biological data annotation enable the wider scientific community to become involved in developing and maintaining such resources. However, this openness raises concerns regarding the quality and correctness of the information added to these knowledge bases. The combination of a collaborative web-based platform with logic-based approaches and Semantic Web technology can be used to address some of these challenges and concerns. Results We have developed the BOWiki, a web-based system that includes a biological core ontology. The core ontology provides background knowledge about biological types and relations. Against this background, an automated reasoner assesses the consistency of new information added to the knowledge base. The system provides a platform for research communities to integrate information and annotate data collaboratively. Availability The BOWiki and supplementary material is available at http://www.bowiki.net/. The source code is available under the GNU GPL from http://onto.eva.mpg.de/trac/BoWiki.
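
    A minimal sketch of the kind of consistency check described above can be written with the owlready2 library: a toy core ontology declares two disjoint classes, a newly added annotation asserts an individual into both, and a description-logic reasoner rejects the edit. The ontology IRI and class names are placeholders, and owlready2 (which needs a Java runtime for its bundled HermiT reasoner) is not the software behind BOWiki.

      # Hypothetical sketch of reasoner-based consistency checking (not BOWiki itself).
      # Requires: pip install owlready2, plus a Java runtime for the bundled HermiT reasoner.
      from owlready2 import (Thing, AllDisjoint, get_ontology, sync_reasoner,
                             OwlReadyInconsistentOntologyError)

      onto = get_ontology("http://example.org/bio-core.owl")   # placeholder IRI

      with onto:
          class BiologicalProcess(Thing): pass
          class CellularComponent(Thing): pass
          AllDisjoint([BiologicalProcess, CellularComponent])  # background knowledge

      # A contributor adds a new annotation that contradicts the core ontology.
      new_entry = onto.BiologicalProcess("apoptosis_annotation")
      new_entry.is_a.append(onto.CellularComponent)

      try:
          sync_reasoner()   # run the bundled reasoner over the knowledge base
          print("Knowledge base is consistent; the edit can be accepted.")
      except OwlReadyInconsistentOntologyError:
          print("Edit rejected: it makes the knowledge base inconsistent.")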

  12. Computer support for physiological cell modelling using an ontology on cell physiology.

    Science.gov (United States)

    Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda

    2006-01-01

    The development of electrophysiological whole-cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can become quite complicated as well, making computer modification difficult. Here, we propose computer support to facilitate structural changes to cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method for making references from CellML files to the ontology and a method for assisting the manipulation of model structures using markup languages together with the ontology are reported. Using these methods, three software utilities, including a graphical model editor, were implemented. Experimental results proved that these methods are effective for the modification of electrophysiological models.

  13. A Formal Theory for Modular ERDF Ontologies

    Science.gov (United States)

    Analyti, Anastasia; Antoniou, Grigoris; Damásio, Carlos Viegas

    The success of the Semantic Web is impossible without any form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules. The ERDF #n-stable model semantics of the extended RDF framework (ERDF) is defined, extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies, while support for hidden knowledge is also provided. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.

  14. Ontology-centric integration and navigation of the dengue literature.

    Science.gov (United States)

    Rajapakse, Menaka; Kanagasabai, Rajaraman; Ang, Wee Tiong; Veeramani, Anitha; Schreiber, Mark J; Baker, Christopher J O

    2008-10-01

    Uninhibited access to the unstructured information distributed across the web and in scientific literature databases continues to be beyond the reach of scientists and health professionals. To address this challenge we have developed a literature driven, ontology-centric navigation infrastructure consisting of a content acquisition engine, a domain-specific ontology (in OWL-DL) and an ontology instantiation pipeline delivering sentences derived by domain-specific text mining. A visual query tool for reasoning over A-box instances in the populated ontology is presented and used to build conceptual queries that can be issued to the knowledgebase. We have deployed this generic infrastructure to facilitate data integration and knowledge sharing in the domain of dengue, which is one of the most prevalent viral diseases that continue to infect millions of people in the tropical and subtropical regions annually. Using our unique methodology we illustrate simplified search and discovery on dengue information derived from distributed resources and aggregated according to dengue ontology. Furthermore we apply data mining to the instantiated ontology to elucidate trends in the mentions of dengue serotypes in scientific abstracts since 1974.
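
    The conceptual queries built by the visual tool ultimately resolve to queries over the A-box instances of the populated ontology; a plain SPARQL equivalent can be sketched with rdflib. The file name, namespace and property names below are illustrative placeholders rather than the actual dengue ontology vocabulary.

      # Illustrative only: query A-box instances of a populated ontology with rdflib.
      from rdflib import Graph

      g = Graph()
      g.parse("dengue_instantiated.owl", format="xml")   # placeholder path to the populated OWL file

      query = """
      PREFIX ex: <http://example.org/dengue#>            # placeholder namespace
      SELECT ?sentence ?pmid
      WHERE {
          ?mention a ex:SerotypeMention ;
                   ex:serotype        "DENV-2" ;
                   ex:foundInSentence ?sentence ;
                   ex:fromAbstract    ?pmid .
      }
      LIMIT 10
      """
      for row in g.query(query):
          print(row.pmid, row.sentence)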

  15. Paradoxes of Social Networking in a Structured Web 2.0 Language Learning Community

    Science.gov (United States)

    Loiseau, Mathieu; Zourou, Katerina

    2012-01-01

    This paper critically inquires into social networking as a set of mechanisms and associated practices developed in a structured Web 2.0 language learning community. This type of community can be roughly described as learning spaces featuring (more or less) structured language learning resources displaying at least some notions of language learning…

  16. Assessing the practice of biomedical ontology evaluation: Gaps and opportunities.

    Science.gov (United States)

    Amith, Muhammad; He, Zhe; Bian, Jiang; Lossio-Ventura, Juan Antonio; Tao, Cui

    2018-04-01

    With the proliferation of heterogeneous health care data in the last three decades, biomedical ontologies and controlled biomedical terminologies play a more and more important role in knowledge representation and management, data integration, natural language processing, as well as decision support for health information systems and biomedical research. Biomedical ontologies and controlled terminologies are intended to assure interoperability. Nevertheless, the quality of biomedical ontologies has hindered their applicability and subsequent adoption in real-world applications. Ontology evaluation is an integral part of ontology development and maintenance. In the biomedicine domain, ontology evaluation is often conducted by third parties as a quality assurance (or auditing) effort that focuses on identifying modeling errors and inconsistencies. In this work, we first organized four categorical schemes of ontology evaluation methods in the existing literature to create an integrated taxonomy. Further, to understand the ontology evaluation practice in the biomedicine domain, we reviewed a sample of 200 ontologies from the National Center for Biomedical Ontology (NCBO) BioPortal-the largest repository for biomedical ontologies-and observed that only 15 of these ontologies have documented evaluation in their corresponding inception papers. We then surveyed the recent quality assurance approaches for biomedical ontologies and their use. We also mapped these quality assurance approaches to the ontology evaluation criteria. It is our anticipation that ontology evaluation and quality assurance approaches will be more widely adopted in the development life cycle of biomedical ontologies. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Semantic heterogeneity: comparing new semantic web approaches with those of digital libraries

    OpenAIRE

    Krause, Jürgen

    2008-01-01

    To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (simple knowledge organization system and others) mitigate common arguments from the digital library (DL) community against participation in the Semantic web. The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing. The points criticised by the semantic web and ontology approaches ar...

  18. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded
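
    A minimal sketch of publishing a semantically described service as RDF can be written with rdflib; the vocabulary, service URI and input/output classes below are invented for illustration and are not the actual SSWAP terms.

      # Illustrative sketch: describe a web service as RDF with rdflib (not the real SSWAP vocabulary).
      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      EX = Namespace("http://example.org/service-vocab#")    # placeholder vocabulary
      g = Graph()
      g.bind("ex", EX)

      svc = URIRef("http://example.org/services/lis-blast")  # placeholder service URI
      g.add((svc, RDF.type, EX.SequenceComparisonService))
      g.add((svc, EX.acceptsInput, EX.NucleicAcidFastaSequence))
      g.add((svc, EX.returnsOutput, EX.BlastReport))
      g.add((svc, EX.providedBy, Literal("Legume Information System")))

      print(g.serialize(format="xml"))   # RDF/XML, ready to publish for semantic discovery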

  19. Building ontologies with basic formal ontology

    CERN Document Server

    Arp, Robert; Spear, Andrew D.

    2015-01-01

    In the era of "big data," science is increasingly information driven, and the potential for computers to store, manage, and integrate massive amounts of data has given rise to such new disciplinary fields as biomedical informatics. Applied ontology offers a strategy for the organization of scientific information in computer-tractable form, drawing on concepts not only from computer and information science but also from linguistics, logic, and philosophy. This book provides an introduction to the field of applied ontology that is of particular relevance to biomedicine, covering theoretical components of ontologies, best practices for ontology design, and examples of biomedical ontologies in use. After defining an ontology as a representation of the types of entities in a given domain, the book distinguishes between different kinds of ontologies and taxonomies, and shows how applied ontology draws on more traditional ideas from metaphysics. It presents the core features of the Basic Formal Ontology (BFO), now u...

  20. Towards an Ontology for the Global Geodynamics Project: Automated Extraction of Resource Descriptions from an XML-Based Data Model

    Science.gov (United States)

    Lumb, L. I.; Aldridge, K. D.

    2005-12-01

    Using the Earth Science Markup Language (ESML), an XML-based data model for the Global Geodynamics Project (GGP) was recently introduced [Lumb & Aldridge, Proc. HPCS 2005, Kotsireas & Stacey, eds., IEEE, 2005, 216-222]. This data model possesses several key attributes: it makes use of XML schema; supports semi-structured ASCII format files; includes Earth Science affinities; and is on track for compliance with emerging Grid computing standards (e.g., the Global Grid Forum's Data Format Description Language, DFDL). Favorable attributes notwithstanding, metadata (i.e., data about data) was identified [Lumb & Aldridge, 2005] as a key challenge for progress in enabling the GGP for Grid computing. Even in projects of small-to-medium scale like the GGP, the manual introduction of metadata has the potential to be the rate-determining metric for progress. Fortunately, an automated approach for metadata introduction has recently emerged. Based on Gleaning Resource Descriptions from Dialects of Languages (GRDDL, http://www.w3.org/2004/01/rdxh/spec), this bottom-up approach allows for the extraction of Resource Description Framework (RDF) representations from the XML-based data model (i.e., the ESML representation of GGP data) subject to rules of transformation articulated via eXtensible Stylesheet Language Transformations (XSLT). In addition to introducing relationships into the GGP data model, and thereby addressing the metadata requirement, the syntax and semantics of RDF comprise a requisite for a GGP ontology, i.e., "the common words and concepts (the meaning) used to describe and represent an area of knowledge" [Daconta et al., The Semantic Web, Wiley, 2003]. After briefly reviewing the XML-based model for the GGP, attention focuses on the automated extraction of an RDF representation via GRDDL with XSLT-delineated templates. This bottom-up approach, in tandem with a top-down approach based on the Protege integrated development environment for ontologies (http
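
    The GRDDL step itself, applying an XSLT transform to the XML-based (ESML) representation to obtain RDF, can be sketched with lxml; the file names are placeholders for the project's actual ESML instance and stylesheet.

      # Sketch of GRDDL-style extraction: XML (ESML) document + XSLT rules -> RDF/XML output.
      # File names are placeholders for the project's data and transformation rules.
      from lxml import etree

      source = etree.parse("ggp_record.esml.xml")   # XML-based data model instance
      rules = etree.parse("esml_to_rdf.xsl")        # rules of transformation (XSLT)
      transform = etree.XSLT(rules)

      rdf_doc = transform(source)                   # the extracted RDF/XML representation
      with open("ggp_record.rdf", "wb") as out:
          out.write(etree.tostring(rdf_doc, pretty_print=True))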

  1. Toward a formal ontology for narrative

    Directory of Open Access Journals (Sweden)

    Ciotti, Fabio

    2016-03-01

    Full Text Available In this paper the rationale and the first draft of a formal ontology for modeling narrative texts are presented. Building on semiotic and structuralist narratology, and on the work carried out in the late 1980s by Giuseppe Gigliozzi in Italy, the focus of my research is the concepts of character and of narrative world/space. This formal model is expressed in the OWL 2 ontology language. The main reason to adopt a formal modeling approach is that I consider the purely probabilistic-quantitative methods (now widespread in digital literary studies) inadequate. An ontology, on the one hand, provides a tool for the analysis of strictly literary texts. On the other hand (though beyond the scope of the present work), its formalization can also represent a significant contribution towards grounding the application of storytelling methods outside of scholarly contexts.

  2. Structure-based classification and ontology in chemistry

    Directory of Open Access Journals (Sweden)

    Hastings Janna

    2012-04-01

    Full Text Available Abstract Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational

  3. A Collaborative System Software Solution for Modeling Business Flows Based on Automated Semantic Web Service Composition

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2009-01-01

    Full Text Available Nowadays, business interoperability is one of the key factors for assuring competitive advantage for the participant business partners. In order to implement business cooperation, scalable, distributed and portable collaborative systems have to be implemented. This article presents some of the most widely used technologies in this field. Furthermore, it presents a software application architecture based on the Business Process Modeling Notation standard and automated semantic web service coupling for modeling business flow in a collaborative manner. The main business processes will be represented in a single, hierarchic flow diagram. Each element of the diagram will represent calls to semantic web services. The business logic (the business rules and constraints) will be structured with the help of OWL (Ontology Web Language). Moreover, OWL will also be used to create the semantic web service specifications.

  4. Ontological semantics in modified categorial grammar

    DEFF Research Database (Denmark)

    Szymczak, Bartlomiej Antoni

    2009-01-01

    Categorial Grammar is a well established tool for describing natural language semantics. In the current paper we discuss some of its drawbacks and how it could be extended to overcome them. We use the extended version for deriving ontological semantics from text. A proof-of-concept implementation...

  5. The Use of Web Questionnaires in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Wilson, Rosemary; Dewaele, Jean-Marc

    2010-01-01

    The present article focuses on data collection through web questionnaires, as opposed to the traditional pen-and-paper method for research in second language acquisition and bilingualism. It is argued that web questionnaires, which have been used quite widely in psychology, have the advantage of reaching out to a larger and more diverse pool of…

  6. Distributed Artificial Intelligence and Knowledge Management: Ontologies and Multi-Agent Systems for an Organizational Semantic Web

    OpenAIRE

    Gandon , Fabien

    2002-01-01

    This work concerns multi-agent systems for the management of a corporate semantic web based on an ontology. It was carried out in the context of the European project CoMMA, focusing on two application scenarios: supporting technology monitoring activities and assisting the integration of a new employee into the organisation. Three aspects were essentially developed in this work: the design of a multi-agent architecture supporting both scenarios, and the organisational top-down approach followed to i...

  7. Use of the CIM Ontology

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Scott; Britton, Jay; Devos, Arnold N.; Widergren, Steven E.

    2006-02-08

    There are many uses for the Common Information Model (CIM), an ontology that is being standardized through Technical Committee 57 of the International Electrotechnical Commission (IEC TC57). The most common uses to date have included application modeling, information exchanges, information management and systems integration. As one should expect, there are many issues that become apparent when the CIM ontology is applied to any one use. Some of these issues are shortcomings within the current draft of the CIM, and others are a consequence of the different ways in which the CIM can be applied using different technologies. As the CIM ontology will and should evolve, there are several dangers that need to be recognized. One is overall consistency and impact upon applications when extending the CIM for a specific need. Another is that a tight coupling of the CIM to specific technologies could limit the value of the CIM in the longer term as an ontology, which becomes a larger issue over time as new technologies emerge. The integration of systems is one specific area of interest for application of the CIM ontology. This is an area dominated by the use of XML for the definition of messages. While this is certainly true when using Enterprise Application Integration (EAI) products, it is even more true with the movement towards the use of Web Services (WS), Service-Oriented Architectures (SOA) and Enterprise Service Buses (ESB) for integration. This general IT industry trend is consistent with trends seen within the IEC TC57 scope of power system management and associated information exchange. The challenge for TC57 is how to best leverage the CIM ontology using the various XML technologies and standards for integration. This paper will provide examples of how the CIM ontology is used and describe some specific issues that should be addressed within the CIM in order to increase its usefulness as an ontology. It will also describe some of the issues and challenges that will

  8. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts the records from the HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  9. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario André s; Valencia-Garcí a, Rafael; Rodriguez-Garcia, Miguel Angel; Colomo-Palacios, Ricardo; Alor-Herná ndez, Giner

    2016-01-01

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. Also, this model allows determination of the answer type expected by the user based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.

  10. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés

    2016-01-11

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. Also, this model allows determination of the answer type expected by the user based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.
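
    Only the final step of such a pipeline, executing the SPARQL query generated from a natural-language question, is sketched below with SPARQLWrapper. Because the availability of a LinkedBrainz endpoint is uncertain, the example queries the public DBpedia endpoint instead, and the hard-coded query stands in for the system's question-to-SPARQL translation.

      # Sketch only: run the SPARQL produced for the question "Which albums did The Beatles record?"
      # against a public endpoint (DBpedia here, as a stand-in for LinkedBrainz).
      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setQuery("""
          PREFIX dbo: <http://dbpedia.org/ontology/>
          PREFIX dbr: <http://dbpedia.org/resource/>
          SELECT ?album WHERE {
              ?album a dbo:Album ;
                     dbo:artist dbr:The_Beatles .
          }
          LIMIT 5
      """)
      sparql.setReturnFormat(JSON)

      results = sparql.query().convert()
      for binding in results["results"]["bindings"]:
          print(binding["album"]["value"])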

  11. Conceptualizing the e-Learning Assessment Domain using an Ontology Network

    Directory of Open Access Journals (Sweden)

    Lucía Romero

    2012-09-01

    Full Text Available During the last years, approaches that use ontologies, the backbone of Semantic Web technologies, for different purposes in the assessment domain of e-Learning have emerged. One of these purposes is the use of ontologies as a means of providing a structure to guide the automated design of assessments. Most of the approaches that deal with this problem have proposed individual ontologies that model only a part of the assessment domain. The main contribution of this paper is an ontology network, called AONet, that conceptualizes the e-assessment domain with the aim of supporting its semi-automatic generation. The main advantage of this network is that it is enriched with rules for considering not only technical aspects of an assessment but also pedagogic

  12. Ontology matters: a commentary on contribution to cultural historical activity

    Science.gov (United States)

    Martin, Jenny

    2017-10-01

    This commentary promotes discussion on the imaginary provided by Sanaz Farhangi in her article entitled, Contribution to activity: a lens for understanding students' potential and agency in physics education. The commentary is concerned with aligning ontological assumptions in research accounts of learning and development with transformative aims. A broad definition of ontology as the theory of existence is preferred. Sociocultural approaches share relational ontology as a common foundation. I agree with scholars elaborating Vygotsky's Transformative Activist Stance that a relational ontology does not imply activism. However, I argue that relational ontology provides a necessary and sufficient theoretical grounding for intentional transformation. I draw upon positioning theory to elaborate the moral aspects of language use and to illustrate that a theory of being as relational already eliminates the transcendental position. I draw on Farhangi's article to further the discussion on the necessity and sufficiency of relational ontology and associated grammars in accounting for activism.

  13. Adaptive Semantic and Social Web-based learning and assessment environment for the STEM

    Science.gov (United States)

    Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar

    2014-05-01

    We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in semi-automatic development and update of their common domain and task ontologies and building their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who do not have any knowledge of the domain. The social web component of the personal adaptive system will allow individual and group learners to interact with each other and discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and difficulty levels based on learners' differences, and lead to different understanding of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences, allow authors to assemble remotely accessed learning material into courses, and provide facilities for instructors to assess (in real time) the perception of students of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user friendly Web interfaces. These include the learning ontology (models learning objects, educational resources, and learning goal); context ontology (supports adaptive strategy by detecting student situation), domain ontology (structures concepts and context), learner ontology (models student profile, preferences, and behavior), task ontologies, technological ontology (defines devices and places that surround the

  14. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    Science.gov (United States)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for querying them. It relies on an automatic, unsupervised and ontology-driven approach for extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We evaluate it on an HTML corpus related to calls for papers in computer science, and the results we obtain are very promising. These results show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.
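
    The alignment step, matching extracted term candidates against labels already present in the ontology, can be approximated with simple normalized string matching; the sketch below uses Python's difflib with invented labels and candidates and is only a toy stand-in for the Extract-Align algorithm.

      # Toy stand-in for aligning extracted term candidates with ontology class labels.
      import difflib

      ontology_labels = ["Conference", "Workshop", "ProgramCommittee", "SubmissionDeadline"]
      candidates = ["program committee", "submission dead-line", "keynote speaker"]

      def normalize(term: str) -> str:
          """Lower-case and strip separators so 'ProgramCommittee' ~ 'program committee'."""
          return "".join(ch for ch in term.lower() if ch.isalnum())

      index = {normalize(label): label for label in ontology_labels}

      for term in candidates:
          match = difflib.get_close_matches(normalize(term), index.keys(), n=1, cutoff=0.8)
          if match:
              print(f"align  '{term}'  ->  ontology class '{index[match[0]]}'")
          else:
              print(f"defer  '{term}'  (no close label; candidate for ontology enrichment)")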

  15. Designing learning management system interoperability in semantic web

    Science.gov (United States)

    Anistyasari, Y.; Sarno, R.; Rochmawati, N.

    2018-01-01

    The extensive adoption of learning management systems (LMS) has set the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in an LMS. The purpose of this study is to design learning management system interoperability in the semantic web, which has not yet been investigated deeply. Moodle is utilized to design the interoperability: several database tables of Moodle are enhanced and some features are added. Semantic web interoperability is provided by exploiting an ontology for the content materials. The ontology is further utilized as a search tool to match users' queries with available courses. It is concluded that LMS interoperability in the Semantic Web is feasible.

  16. Semantic Web repositories for genomics data using the eXframe platform.

    Science.gov (United States)

    Merrill, Emily; Corlosquet, Stéphane; Ciccarese, Paolo; Clark, Tim; Das, Sudeshna

    2014-01-01

    With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases very difficult. To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in Sparql Protocol and RDF Query Language (SPARQL) endpoint. Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate it with heterogeneous resources and make it interoperable with the vast Semantic Web of biomedical knowledge.
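
    The automatic RDF production described above, mapping fields of an experiment record to ontology-backed properties and serializing triples for an RDF store, can be sketched with rdflib; the namespaces, mapping and sample values are placeholders rather than eXframe's actual model.

      # Sketch: turn a sample-annotation record into RDF triples via a property mapping (placeholders).
      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      EX = Namespace("http://example.org/exframe#")       # placeholder experiment vocabulary
      OBO = Namespace("http://purl.obolibrary.org/obo/")  # ontology terms referenced by the mapping

      record = {"organism": OBO.NCBITaxon_9606,           # Homo sapiens
                "assay_type": Literal("RNA-seq"),
                "tissue": Literal("hippocampus")}

      mapping = {"organism": EX.fromOrganism,             # record field -> RDF property
                 "assay_type": EX.usesAssay,
                 "tissue": EX.derivedFromTissue}

      g = Graph()
      sample = URIRef("http://example.org/experiment/42/sample/1")
      g.add((sample, RDF.type, EX.Biomaterial))
      for field, value in record.items():
          g.add((sample, mapping[field], value))

      print(g.serialize(format="turtle"))  # ready to load into an RDF store with a SPARQL endpoint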

  17. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  18. Semi Automatic Ontology Instantiation in the domain of Risk Management

    Science.gov (United States)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text in the domain of Risk Management. The method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) the ontology instantiation process. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European community) as a Generic Domain Ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.
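
    Step 1 of the method, part-of-speech annotation, can be sketched with NLTK, followed by a naive noun-verb-noun scan that surfaces candidate relation instances for step 2; the sentence and pattern are illustrative and this is not the pipeline actually used with the PRIMA ontology.

      # Sketch of step 1 (POS tagging) plus a naive noun-verb-noun scan for relation candidates.
      import nltk

      # One-off downloads of the tokenizer and tagger models (resource names may differ in newer NLTK releases).
      nltk.download("punkt", quiet=True)
      nltk.download("averaged_perceptron_tagger", quiet=True)

      sentence = "Chlorine exposure causes severe respiratory irritation."
      tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
      print(tagged)   # e.g. [('Chlorine', 'NN'), ('exposure', 'NN'), ('causes', 'VBZ'), ...]

      # Naive relation-instance candidates: (noun, verb, noun) triples in surface order.
      nouns_and_verbs = [(w, t) for w, t in tagged if t.startswith(("NN", "VB"))]
      for i in range(len(nouns_and_verbs) - 2):
          (w1, t1), (w2, t2), (w3, t3) = nouns_and_verbs[i:i + 3]
          if t1.startswith("NN") and t2.startswith("VB") and t3.startswith("NN"):
              print("candidate relation instance:", (w1, w2, w3))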

  19. Annotating breast cancer microarray samples using ontologies

    Science.gov (United States)

    Liu, Hongfang; Li, Xin; Yoon, Victoria; Clarke, Robert

    2008-01-01

    As the most common cancer among women, breast cancer results from the accumulation of mutations in essential genes. Recent advances in high-throughput gene expression microarray technology have inspired researchers to use the technology to assist breast cancer diagnosis, prognosis, and treatment prediction. However, the high dimensionality of microarray experiments and public access to data from many experiments have caused inconsistencies, which initiated the development of controlled terminologies and ontologies for annotating microarray experiments, such as the standard microarray Gene Expression Data (MGED) ontology (MO). In this paper, we developed BCM-CO, an ontology built from the NCI Thesaurus and tailored specifically for indexing clinical annotations of breast cancer microarray samples. Our research showed that the coverage of the NCI Thesaurus is very limited with respect to i) terms used by researchers to describe breast cancer histology (covering 22 out of 48 histology terms); ii) breast cancer cell lines (covering one out of 12 cell lines); and iii) classes corresponding to breast cancer grading and staging. By incorporating a wider range of those terms into BCM-CO, we were able to index breast cancer microarray samples from GEO using BCM-CO and the MGED ontology, and developed a prototype system with a web interface that allows the retrieval of microarray data based on the ontology annotations. PMID:18999108

  20. The Semantics of Web Services: An Examination in GIScience Applications

    Directory of Open Access Journals (Sweden)

    Xuan Shi

    2013-09-01

    Full Text Available Web services are a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI), and communicate by the Simple Object Access Protocol (SOAP). Such a vision has never been fully accomplished yet. Although WSDL was criticized for giving only a syntactic, not semantic, definition of web services, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions of web services, which serve different purposes in service computation. Further, this paper proposes that the semantics of a web service are neutral and independent of the service interface definition, data types and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in GIScience applications are examined in this paper, while the formalization of geospatial services needs to be constructed by the GIScience community towards a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.

  1. GFVO: the Genomic Feature and Variation Ontology

    KAUST Repository

    Baran, Joachim

    2015-05-05

    Falling costs in genomic laboratory experiments have led to a steady increase of genomic feature and variation data. Multiple genomic data formats exist for sharing these data, and whilst they are similar, they are addressing slightly different data viewpoints and are consequently not fully compatible with each other. The fragmentation of data format specifications makes it hard to integrate and interpret data for further analysis with information from multiple data providers. As a solution, a new ontology is presented here for annotating and representing genomic feature and variation dataset contents. The Genomic Feature and Variation Ontology (GFVO) specifically addresses genomic data as it is regularly shared using the GFF3 (incl. FASTA), GTF, GVF and VCF file formats. GFVO simplifies data integration and enables linking of genomic annotations across datasets through common semantics of genomic types and relations. Availability and implementation. The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology’s GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation is provided on a separate web-page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use.

  2. A Proposition Of Knowledge Management Methodology For The Purpose Of Reasoning With The Use Of An Upper-Ontology

    Directory of Open Access Journals (Sweden)

    Kamil Szymański

    2007-01-01

    Full Text Available This article describes a proposition of knowledge organization for the purpose of reasoning using an upper-ontology. It presents a model of integrated ontologies architecture which consists of a domain ontologies layer with instances, a shared upper-ontology layer with additional rules and a layer of ontologies mapping concrete domain ontologies with the upper-ontology. Thanks to the upper-ontology, new facts were concluded from domain ontologies during the reasoning process. A practical realization proposition is given as well. It is based on some popular Semantic Web technologies and tools, such as OWL, SWRL, nRQL, Protégé and Racer.

  3. Application to visualize an ontology for the human semen analysis domain

    Directory of Open Access Journals (Sweden)

    Roberto Casañas

    2007-06-01

    Full Text Available This work presents the design and implementation of an ontology for the domain of human semen analysis, whose objective is to represent, organize, formalize and standardize the domain knowledge so that it can be shared and reused by different groups of people and software applications. To visualize the ontology, an application based on a client/server architecture for Web environments was developed, consisting of an Administration module and a Public Access module. The first is used to maintain the ontology's Web site, while the second allows users to access the stored knowledge and a set of resources such as images, videos, articles related to the domain, manuals and laboratory protocols. The proposed architecture facilitates the observation and retrieval of complex knowledge structures, as well as the navigation and administration of the information represented in the ontology. The approach used in the design of the information retrieval mechanisms is aimed both at users who are not very familiar with the domain vocabulary and at those who already know it. This functionality is of special interest given how heterogeneous the audience targeted by the ontology is, including professionals and students of the health sciences, among others. The Methontology methodology was selected to develop the ontology, and the Protégé editor was used for its implementation.

  4. Ontology evolution in physics

    OpenAIRE

    Chan, Michael

    2013-01-01

    With the advent of reasoning problems in dynamic environments, there is an increasing need for automated reasoning systems to automatically adapt to unexpected changes in representations. In particular, the automation of the evolution of their ontologies needs to be enhanced without substantially sacrificing expressivity in the underlying representation. Revision of beliefs is not enough, as adding to or removing from beliefs does not change the underlying formal language. Gene...

  5. A Lexical-Ontological Resource for Consumer Heathcare

    Science.gov (United States)

    Cardillo, Elena

    In Consumer Healthcare Informatics it is still difficult for laypersons to understand and act on health information, due to the persistent communication gap between specialized medical terminology and that used by healthcare consumers. Furthermore, existing clinically-oriented terminologies cannot provide sufficient support when integrated into consumer-oriented applications, so there is a need to create consumer-friendly terminologies reflecting the different ways healthcare consumers express and think about health topics. Following this direction, this work suggests a way to support the design of an ontology-based system that mitigates this gap, using knowledge engineering and Semantic Web technologies. The system is based on the development of a consumer-oriented medical terminology which will be integrated with other existing domain ontologies/terminologies into a medical ontology repository. This will support consumer-oriented healthcare systems by providing many knowledge services to help users in accessing and managing their healthcare data.

  6. Toward the Use of an Upper Ontology for U.S. Government and U.S. Military Domains: An Evaluation

    National Research Council Canada - National Science Library

    Semy, Salim K; Pulvermacher, Mary K; Obrst, Leo J

    2004-01-01

    ...) of data and ultimately of applications. Key to the vision of a Semantic Web is the ability to capture data and application semantics in ontologies and map these ontologies together via related concepts...

  7. Knowledge Portals: Ontologies at Work

    OpenAIRE

    Staab, Steffen; Maedche, Alexander

    2001-01-01

    Knowledge portals provide views onto domain-specific information on the World Wide Web, thus helping their users find relevant, domain-specific information. The construction of intelligent access and the contribution of information to knowledge portals, however, remained an ad hoc task, requiring extensive manual editing and maintenance by the knowledge portal providers. To diminish these efforts, we use ontologies as a conceptual backbone for providing, accessing, and structuring information...

  8. Learning a Language with Web 2.0: Exploring the Use of Social Networking Features of Foreign Language Learning Websites

    Science.gov (United States)

    Stevenson, Megan P.; Liu, Min

    2010-01-01

    This paper presents the results of an online survey and a usability test performed on three foreign language learning websites that use Web 2.0 technology. The online survey was conducted to gain an understanding of how current users of language learning websites use them for learning and social purposes. The usability test was conducted to gain…

  9. Moroccan higher education students’ and teachers’ perceptions towards using Web 2.0 technologies in language learning and teaching

    Directory of Open Access Journals (Sweden)

    Rdouan Faizi

    2018-03-01

    Full Text Available The objective of this paper is to examine Moroccan higher education students’ and teachers’ perceptions and attitudes towards using Web 2.0 technologies in language learning and teaching. The results of the study revealed that all the informants were immersed in using these Internet-based applications for personal and educational purposes. Nevertheless, while language learners reported to make beneficial uses of these online platforms as language learning tools, the great majority of the interviewed faculty members did not really benefit from these platforms. Although language teachers acknowledged that Web 2.0 technologies had a positive impact on language teaching and learning, most of them were still reluctant to incorporate these tools in educational practice. The findings demonstrated that most teachers’ use of these applications was limited to sending or transferring web links and learning materials produced by other Internet users. Rather than making effective use of Web 2.0 technologies and applications as teaching facilities, most teachers used them only as a means of communication.

  10. Ontology for cell-based geographic information

    Science.gov (United States)

    Zheng, Bin; Huang, Lina; Lu, Xinhai

    2009-10-01

    Interoperability is a key notion in geographic information science (GIS) for the sharing of geographic information (GI), and it requires seamless translation among different information sources. Ontology is employed in GI discovery to settle semantic conflicts because of its natural-language appearance and logical hierarchy structure, which are considered able to provide better context for both human understanding and machine cognition in describing locations and relationships in the geographic world. At present, however, most studies on field ontology are deduced from philosophical themes and are not applicable to the raster representation in GIS, which is a kind of field-like phenomenon but does not physically coincide with the general philosophical concept of a field (which mostly comes from physics). That is why we specifically discuss a cell-based GI ontology in this paper. The discussion starts with an investigation of the physical characteristics of cell-based raster GI. Then, a unified cell-based GI ontology framework for the recognition of raster objects is introduced, from which a conceptual interface connecting human epistemology and the computer world, the so-called "endurant-occurrant window", is developed for better raster GI discovery and sharing.

  11. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    Science.gov (United States)

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology; and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework, which allows building an arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). The OpenTox toxicological ontology projects may be accessed via the Open

  12. Ontology Maintenance using Textual Analysis

    Directory of Open Access Journals (Sweden)

    Yassine Gargouri

    2003-10-01

    Full Text Available Ontologies are continuously confronted with the problem of evolution. Due to the complexity of the changes to be made, a maintenance process, at least a semi-automatic one, is increasingly necessary to facilitate this task and to ensure its reliability. In this paper, we propose an ontology maintenance model for a domain whose originality is to be language independent and based on a sequence of text-processing steps that extract highly related terms from a corpus. Initially, we apply a document classification technique using GRAMEXCO to generate classes of text segments carrying a similar type of information and to identify their shared lexicon, taken to be highly related to a single topic. This technique allows a first general and robust exploration of the corpus. We then apply the Latent Semantic Indexing method to extract, from this shared lexicon, the most strongly associated terms, which an expert must then review to confirm their relevance and thus update the current ontology. Finally, we show how the complementarity of these two techniques, grounded in cognitive foundations, constitutes a powerful refinement process.

  13. ROMIE, une approche d'alignement d'ontologies à base d'instances

    OpenAIRE

    Elbyed , Abdeltif

    2009-01-01

    System interoperability is an important issue, widely recognized in information technology intensive organizations and in the information systems research community. The wide adoption of the World Wide Web to access and distribute information further stresses the need for system interoperability. Initiatives such as the Semantic Web facilitate the localization and integration of data in a more intelligent way via the use of ontologies. The Semantic Web offers a compelling vis...

  14. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Full Text Available Web spammers aim to obtain higher ranks for their web pages by including spam content that deceives search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for Google's search engine. Using spam pages in Arabic as a case study, we show that, unlike for similar English pages, Google's anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after they click on a URL and to filter spam out of search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam content.

  15. Regular paths in SparQL: querying the NCI Thesaurus.

    Science.gov (United States)

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-11-06

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, this has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
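
    GLEEN's own path syntax is not reproduced in the record above, so the sketch below instead uses standard SPARQL 1.1 property paths, a closely related mechanism, to follow transitive rdfs:subClassOf chains in an OWL/RDF file; the file name is a hypothetical local copy of an ontology such as the NCI Thesaurus.

```python
# Hedged sketch: SPARQL 1.1 property paths (analogous to, but not, GLEEN's
# regular path expressions) over a local OWL/RDF file. "thesaurus.owl" is a
# hypothetical file name.
from rdflib import Graph

graph = Graph()
graph.parse("thesaurus.owl", format="xml")

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls ?ancestor
WHERE {
  ?cls rdfs:subClassOf+ ?ancestor .   # one or more subClassOf steps
}
LIMIT 20
"""

for row in graph.query(query):
    print(row.cls, "is a descendant of", row.ancestor)
```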

  16. Fish Ontology framework for taxonomy-based fish recognition

    Science.gov (United States)

    Ali, Najib M.; Khan, Haris A.; Then, Amy Y-Hui; Ving Ching, Chong; Gaur, Manas

    2017-01-01

    Life science ontologies play an important role in the Semantic Web. Given the diversity in fish species and the associated wealth of information, it is imperative to develop an ontology capable of linking and integrating this information in an automated fashion. As such, we introduce the Fish Ontology (FO), an automated classification architecture of existing fish taxa which provides taxonomic information on unknown fish based on metadata restrictions. It is designed to support knowledge discovery, provide semantic annotation of fish and fisheries resources, data integration, and information retrieval. Automated classification for unknown specimens is a unique feature that currently does not appear to exist in other known ontologies. Examples of automated classification for major groups of fish are demonstrated, showing the inferred information by introducing several restrictions at the species or specimen level. The current version of FO has 1,830 classes, includes widely used fisheries terminology, and models major aspects of fish taxonomy, grouping, and character. With more than 30,000 known fish species globally, the FO will be an indispensable tool for fish scientists and other interested users. PMID:28929028

  17. Fish Ontology framework for taxonomy-based fish recognition

    Directory of Open Access Journals (Sweden)

    Najib M. Ali

    2017-09-01

    Full Text Available Life science ontologies play an important role in the Semantic Web. Given the diversity in fish species and the associated wealth of information, it is imperative to develop an ontology capable of linking and integrating this information in an automated fashion. As such, we introduce the Fish Ontology (FO), an automated classification architecture of existing fish taxa which provides taxonomic information on unknown fish based on metadata restrictions. It is designed to support knowledge discovery, provide semantic annotation of fish and fisheries resources, data integration, and information retrieval. Automated classification for unknown specimens is a unique feature that currently does not appear to exist in other known ontologies. Examples of automated classification for major groups of fish are demonstrated, showing the inferred information by introducing several restrictions at the species or specimen level. The current version of FO has 1,830 classes, includes widely used fisheries terminology, and models major aspects of fish taxonomy, grouping, and character. With more than 30,000 known fish species globally, the FO will be an indispensable tool for fish scientists and other interested users.

  18. Sign Language Translation in State Administration in Germany: Barrier Free Web Accessibility

    OpenAIRE

    Lišková, Kateřina

    2014-01-01

    The aim of this thesis is to describe Web accessibility in state administration in the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of a written form of the German language. The Deaf community's demand for information in an accessible form, as grounded in legal documents, is presented in relation to the theory of translation. How translating from written texts into sign language works in pract...

  19. Listening Strategy Use and Influential Factors in Web-Based Computer Assisted Language Learning

    Science.gov (United States)

    Chen, L.; Zhang, R.; Liu, C.

    2014-01-01

    This study investigates second and foreign language (L2) learners' listening strategy use and factors that influence their strategy use in a Web-based computer assisted language learning (CALL) system. A strategy inventory, a factor questionnaire and a standardized listening test were used to collect data from a group of 82 Chinese students…

  20. Knowledge representation and management: towards an integration of a semantic web in daily health practice.

    Science.gov (United States)

    Griffon, N; Charlet, J; Darmoni, Sj

    2013-01-01

    To summarize the best papers in the field of Knowledge Representation and Management (KRM). A synopsis of the four selected articles for the IMIA Yearbook 2013 KRM section is provided, as well as highlights of current KRM trends, in particular of the semantic web in daily health practice. The manual selection was performed in three stages: first a set of 3,106 articles, then a second set of 86 articles followed by a third set of 15 articles, and finally the last set of four chosen articles. Among the four selected articles (see Table 1), one focuses on knowledge engineering to prevent adverse drug events; the objective of the second is to propose mappings between clinical archetypes and SNOMED CT in the context of clinical practice; the third presents an ontology to create a question-answering system; the fourth describes a biomonitoring network based on semantic web technologies. These four articles clearly indicate that the health semantic web has become part of the daily practice of health professionals since 2012. In the review of the second set of 86 articles, the same topics included in the previous IMIA Yearbook remain active research fields: knowledge extraction, automatic indexing, information retrieval, natural language processing, and management of health terminologies and ontologies.

  1. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. The health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used by the users to search the contents. The vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced better search results compared to existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and information consumers.

  2. A Lexical-Ontological Resource for Consumer Healthcare

    Science.gov (United States)

    Cardillo, Elena; Serafini, Luciano; Tamilin, Andrei

    In Consumer Healthcare Informatics it is still difficult for laypeople to find, understand and act on health information, due to the persistent communication gap between specialized medical terminology and that used by healthcare consumers. Furthermore, existing clinically-oriented terminologies cannot provide sufficient support when integrated into consumer-oriented applications, so there is a need to create consumer-friendly terminologies reflecting the different ways healthcare consumers express and think about health topics. Following this direction, this work suggests a way to support the design of an ontology-based system that mitigates this gap, using knowledge engineering and semantic web technologies. The system is based on the development of a consumer-oriented medical terminology that will be integrated with other medical domain ontologies and terminologies into a medical ontology repository. This will support consumer-oriented healthcare systems, such as Personal Health Records, by providing many knowledge services to help users in accessing and managing their healthcare data.

  3. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  4. Annotating spatio-temporal datasets for meaningful analysis in the Web

    Science.gov (United States)

    Stasch, Christoph; Pebesma, Edzer; Scheider, Simon

    2014-05-01

    More and more environmental datasets that vary in space and time are available on the Web. This brings the advantage that the data can be used for purposes other than those originally foreseen, but also the danger that users may apply inappropriate analysis procedures because they lack important assumptions made during the data collection process. In order to guide users towards a meaningful (statistical) analysis of spatio-temporal datasets available on the Web, we developed in our previous work [1] a Higher-Order-Logic formalism that captures some of these relevant assumptions. It allows proving, in a semi-automated fashion, whether spatial prediction and aggregation are meaningful. In this poster presentation, we present a concept for annotating spatio-temporal datasets available on the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It captures the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, which in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. To allow data producers to add annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support stronger typing of spatio-temporal datatypes, guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
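
    As a hedged illustration of the annotation idea described above (the authors' actual OWL pattern and vocabulary are not reproduced here), the sketch below attaches a variable-type concept to a dataset URI with rdflib; the namespace, class names, and dataset URI are hypothetical.

```python
# Hedged sketch: annotate a dataset with a spatio-temporal variable type.
# Namespace, class names, and dataset URI are hypothetical stand-ins for the
# authors' ontology pattern.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

MEAN = Namespace("http://example.org/meaningful-analysis#")     # hypothetical
dataset = URIRef("http://example.org/data/pm10-measurements")   # hypothetical

graph = Graph()
graph.bind("mean", MEAN)

# Declaring the observations to be a field (a continuous phenomenon) licenses
# interpolation but rules out simple summation when aggregating.
graph.add((dataset, RDF.type, MEAN.FieldDataset))
graph.add((dataset, MEAN.observedProperty, MEAN.ParticulateMatter10))

print(graph.serialize(format="turtle"))
```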

  5. Supporting Multi-view User Ontology to Understand Company Value Chains

    Science.gov (United States)

    Zuo, Landong; Salvadores, Manuel; Imtiaz, Sm Hazzaz; Darlington, John; Gibbins, Nicholas; Shadbolt, Nigel R.; Dobree, James

    The objective of the Market Blended Insight (MBI) project is to develop web based techniques to improve the performance of UK Business to Business (B2B) marketing activities. The analysis of company value chains is a fundamental task within MBI because it is an important model for understanding the market place and the company interactions within it. The project has aggregated rich data profiles of 3.7 million companies that form the active UK business community. The profiles are augmented by Web extractions from heterogeneous sources to provide unparalleled business insight. Advances by the Semantic Web in knowledge representation and logic reasoning allow flexible integration of data from heterogeneous sources, transformation between different representations and reasoning about their meaning. The MBI project has identified that the market insight and analysis interests of different types of users are difficult to maintain using a single domain ontology. Therefore, the project has developed a technique to undertake a plurality of analyses of value chains by deploying a distributed multi-view ontology to capture different user views over the classification of companies and their various relationships.

  6. Semantic Web integration of Cheminformatics resources with the SADI framework

    Directory of Open Access Journals (Sweden)

    Chepelev Leonid L

    2011-05-01

    Full Text Available Abstract Background The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking has partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. Results We have created a prototype of the Semantic Automated Discovery and Integration (SADI) framework of SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and the available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS. Finally, we demonstrate the value of computational resource

  7. Web 2.0 and Second Language Learning: What Does the Research Tell Us?

    Science.gov (United States)

    Wang, Shenggao; Vasquez, Camilla

    2012-01-01

    This article reviews current research on the use of Web 2.0 technologies in second language (L2) learning. Its purpose is to investigate the theoretical perspectives framing it, to identify some of the benefits of using Web 2.0 technologies in L2 learning, and to discuss some of the limitations. The review reveals that blogs and wikis have been…

  8. An ontology-based, mobile-optimized system for pharmacogenomic decision support at the point-of-care.

    Directory of Open Access Journals (Sweden)

    Jose Antonio Miñarro-Giménez

    Full Text Available The development of genotyping and genetic sequencing techniques and their evolution towards low costs and quick turnaround have encouraged a wide range of applications. One of the most promising applications is pharmacogenomics, where genetic profiles are used to predict the most suitable drugs and drug dosages for the individual patient. This approach aims to ensure appropriate medical treatment and avoid, or properly manage, undesired side effects. We developed the Medicine Safety Code (MSC) service, a novel pharmacogenomics decision support system, to provide physicians and patients with the ability to represent pharmacogenomic data in computable form and to provide pharmacogenomic guidance at the point-of-care. Pharmacogenomic data of individual patients are encoded as Quick Response (QR) codes and can be decoded and interpreted with common mobile devices without requiring a centralized repository for storing genetic patient data. In this paper, we present the first fully functional release of this system and describe its architecture, which utilizes Web Ontology Language 2 (OWL 2) ontologies to formalize pharmacogenomic knowledge and to provide clinical decision support functionalities. The MSC system provides a novel approach for enabling the implementation of personalized medicine in clinical routine.

  9. ``Force,'' ontology, and language

    Science.gov (United States)

    Brookes, David T.; Etkina, Eugenia

    2009-06-01

    We introduce a linguistic framework through which one can interpret systematically students’ understanding of and reasoning about force and motion. Some researchers have suggested that students have robust misconceptions or alternative frameworks grounded in everyday experience. Others have pointed out the inconsistency of students’ responses and presented a phenomenological explanation for what is observed, namely, knowledge in pieces. We wish to present a view that builds on and unifies aspects of this prior research. Our argument is that many students’ difficulties with force and motion are primarily due to a combination of linguistic and ontological difficulties. It is possible that students are primarily engaged in trying to define and categorize the meaning of the term “force” as spoken about by physicists. We found that this process of negotiation of meaning is remarkably similar to that engaged in by physicists in history. In this paper we will describe a study of the historical record that reveals an analogous process of meaning negotiation, spanning multiple centuries. Using methods from cognitive linguistics and systemic functional grammar, we will present an analysis of the force and motion literature, focusing on prior studies with interview data. We will then discuss the implications of our findings for physics instruction.

  10. Sistem Pencarian Informasi Berbasis Ontologi untuk Jalur Pendakian Gunung Menggunakan Query Bahasa Alami dengan Penyajian Peta Interaktif

    Directory of Open Access Journals (Sweden)

    Fadhila Tangguh Admojo

    2017-01-01

    This research aims to provide a solution to the problems faced by climbers by developing an information retrieval system for mountain climbing paths using semantic technology (an ontology-based approach). The system is built on two knowledge bases (ontologies): ontology Bahasa, which represents linguistic knowledge, and ontology Mountaineering, which represents mountaineering knowledge. The system is designed to process and understand input in natural language form. Understanding of the natural language input is based on syntactic and semantic analysis using the rules of Indonesian grammar. The results show that the system is able to understand natural language input and is capable of detecting input that does not conform to the rules of Indonesian grammar, both syntactically and semantically. The system is also able to use a thesaurus of words in the search process. Quantitative test results show that the system is able to understand 69% of inputs taken at random from the respondents.

  11. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    Science.gov (United States)

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem (which we define as a configuration and a query of results) exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  12. Desiderata for ontologies to be used in semantic annotation of biomedical documents.

    Science.gov (United States)

    Bada, Michael; Hunter, Lawrence

    2011-02-01

    A wealth of knowledge valuable to the translational research scientist is contained within the vast biomedical literature, but this knowledge is typically in the form of natural language. Sophisticated natural-language-processing systems are needed to translate text into unambiguous formal representations grounded in high-quality consensus ontologies, and these systems in turn rely on gold-standard corpora of annotated documents for training and testing. To this end, we are constructing the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-text biomedical journal articles that are being manually annotated with the entire sets of terms from select vocabularies, predominantly from the Open Biomedical Ontologies (OBO) library. Our efforts in building this corpus have illuminated infelicities of these ontologies with respect to the semantic annotation of biomedical documents, and we propose desiderata whose implementation could substantially improve their utility in this task; these include the integration of overlapping terms across OBOs, the resolution of OBO-specific ambiguities, the integration of the BFO with the OBOs and the use of mid-level ontologies, the inclusion of noncanonical instances, and the expansion of relations and realizable entities. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. 8th Chinese Conference on The Semantic Web and Web Science

    CERN Document Server

    Du, Jianfeng; Wang, Haofen; Wang, Peng; Ji, Donghong; Pan, Jeff Z; CSWS 2014

    2014-01-01

    This book constitutes the thoroughly refereed papers of the 8th Chinese Conference on The Semantic Web and Web Science, CSWS 2014, held in Wuhan, China, in August 2014. The 22 research papers presented were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections such as ontology reasoning and learning; semantic data generation and management; and semantic technology and applications.

  14. Ontology Update in the Cognitive Model of Ontology Learning

    Directory of Open Access Journals (Sweden)

    Zhang De-Hai

    2016-01-01

    Full Text Available Ontologies are used in many active research fields, but most ontology construction methods are semi-automatic, and building an ontology remains a tedious and painstaking task. In this paper, a cognitive model is presented for ontology learning that can simulate how human beings learn from the world. In this model, cognitive strategies are applied together with constrained axioms. Ontology update is a key step of ontology learning, required when new knowledge is added to an existing ontology and conflicts with the old knowledge. This proposal designs and validates an ontology update method based on the axiomatic cognitive model, including the update postulates, axioms and operations of the learning model. It is proved that these operators conform to the established axiom system.

  15. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    Science.gov (United States)

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2015-01-01

    There remain significant difficulties selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. We also report a case study in which we attempted to explore candidate drugs effective for bipolar disorder and epilepsy. We constructed an ontology incorporating knowledge about the two diseases and performed semantic reasoning tasks with the ontology. The results suggested 48 candidate drugs that hold promise for further breakthroughs. The evaluation demonstrated the validity of our approach. Our approach prioritizes the candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders.

  16. CAMINO HACIA LA WEB SEMÁNTICA

    Directory of Open Access Journals (Sweden)

    Jorge Alejandro Castillo Morales

    2006-01-01

    Full Text Available The rapid growth of the World Wide Web makes it increasingly difficult to search, extract, interpret and process information from the Web. As an alternative to this problem, the Semantic Web is being developed: a new technology that makes Web content more meaningful to software applications. In the Semantic Web, annotations are added that express the meaning of the data in Web pages. For these annotations to be useful, a shared understanding (between their creators and their users) of precisely defined annotations is necessary. Ontologies, which define the important concepts in a knowledge domain and the properties of each concept, are used for this purpose. Ontologies make it possible to define terminologies and express semantic properties. As a result, the Semantic Web promises to provide a level of automation and integration that is impossible for the current Web. Likewise, the Semantic Web will be able to execute advanced queries that require supporting knowledge for their resolution.

  17. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
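
    As a hedged illustration of closed-world validation with SHACL (using the pySHACL library rather than the authors' tooling), the sketch below checks a toy clinical-style node against a toy shape; the shape and data are invented for illustration and are not the CIMI models or the authors' ontology design patterns.

```python
# Hedged sketch of SHACL validation with pySHACL. Shape and data are toy
# examples, not the CIMI models or the authors' design patterns.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:BloodPressureShape a sh:NodeShape ;
    sh:targetClass ex:BloodPressureObservation ;
    sh:property [
        sh:path ex:systolicValue ;
        sh:datatype xsd:decimal ;
        sh:minCount 1 ;        # a systolic value must be present
        sh:maxCount 1 ;
    ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:obs1 a ex:BloodPressureObservation .   # missing ex:systolicValue
"""

shapes_graph = Graph().parse(data=shapes_ttl, format="turtle")
data_graph = Graph().parse(data=data_ttl, format="turtle")

conforms, _report_graph, report_text = validate(data_graph, shacl_graph=shapes_graph)
print("Conforms:", conforms)   # expected: False, because the value is missing
print(report_text)
```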

  18. A two-staged approach to developing and evaluating an ontology for delivering personalized education to diabetic patients.

    Science.gov (United States)

    Quinn, Susan; Bond, Raymond; Nugent, Chris

    2018-09-01

    Ontologies are often used in biomedical and health domains to provide a concise and consistent means of attributing meaning to medical terminology. Although domain specialists are typically novices in ontology engineering, their evaluation of an ontology provides an opportunity to enhance its objectivity, accuracy, and coverage of the domain itself. This paper provides an evaluation of the viability of using ontology engineering novices to evaluate and enrich an ontology that can be used for personalized diabetic patient education. We describe a methodology for engaging healthcare and information technology specialists with a range of ontology engineering tasks. We used 87.8% of the data collected to validate the accuracy of our ontological model. The contributions also enabled a 16% increase in the class size and an 18% increase in object properties. Furthermore, we propose that ontology engineering novices can make valuable contributions to ontology development. Application-specific evaluation of the ontology using a semantic-web-based architecture is also discussed.

  19. Rancang Bangun Plugin Protégé Menggunakan Ekspresi SPARQL-DL Dengan Masukan Bahasa Alami

    Directory of Open Access Journals (Sweden)

    Muhammad Fahrurrozi

    2017-07-01

    Full Text Available The semantic web is a technology that allows us to build a knowledge base, or ontology, so that the information in web pages can be understood by computers. One software tool for building ontologies for the semantic web is Protégé. Protégé allows developers to develop an ontology using description logic expressions and provides plugins such as DL-Query and SPARQL-Query to display information involving class, property and individual expressions in the ontology. The problem that then arises is that the DL-Query plugin, although equipped with reasoning functionality, can only process rules that involve class expressions over object properties, while the SPARQL-Query plugin lacks the reasoning abilities of DL-Query even though it can process queries involving classes, properties and individuals. This research produced a new plugin using SPARQL-DL expressions with natural language input, since Protégé does not provide a plugin with natural language input for viewing the results of the combined expressions contained in the ontology. The plugin allows developers to view ontology information in a language that is easier to understand, without having to think about the complicated structure of SPARQL queries.

  20. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It takes service selection as its gist and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  1. USE OF ONTOLOGIES FOR KNOWLEDGE BASES CREATION TUTORING COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Cheremisina Lyubov

    2014-11-01

    Full Text Available This paper deals with the use of ontologies in the design and development of intelligent tutoring systems. We consider the shortcomings of educational software and distance learning systems and the advantages of using ontologies in their design, which motivates the creation of educational computer systems based on systematized knowledge. We review the classification, properties, uses and benefits of ontologies. Two approaches to the problem of ontology mapping are characterized: the first is manual mapping; the second compares the names of concepts based on their lexical similarity with the help of special dictionaries. The languages available for the formal description of ontologies are analyzed. A formal mathematical model of ontologies is considered, together with the ontology consistency problem: different developers may create syntactically or semantically heterogeneous ontologies for the same domain, and using them together requires a compatible translation or mapping. An algorithm for combining ontologies is presented. Finally, the practical value of developing an ontology for electronic educational resources is characterized, and recommendations for further research and development are given, such as implementing other components of the integration system, formalizing the integration processes, and developing universal ontology extension algorithms.

  2. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data.

    Science.gov (United States)

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-05-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.
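
    The converter described above is implemented in Java and its mapping rules are not given in the record, so the following Python sketch only illustrates the general XML-to-OWL idea: a toy clinical element is read from XML and emitted as OWL/RDF triples. The element names, namespace, and mapping are hypothetical.

```python
# Hedged sketch of an XML-to-OWL mapping in the spirit of the CEM-OWL
# converter; element names, namespace, and mapping rules are hypothetical.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, OWL

CEM = Namespace("http://example.org/cem#")  # hypothetical namespace

xml_doc = """
<clinicalElement name="SystolicBloodPressureMeas">
  <data type="PhysicalQuantity"/>
</clinicalElement>
"""

root = ET.fromstring(xml_doc)
graph = Graph()
graph.bind("cem", CEM)

model_uri = CEM[root.attrib["name"]]
graph.add((model_uri, RDF.type, OWL.Class))  # each model becomes an OWL class
for child in root:
    graph.add((model_uri, CEM.hasDataType, Literal(child.attrib["type"])))

print(graph.serialize(format="turtle"))
```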

  3. Identification of protein features encoded by alternative exons using Exon Ontology.

    Science.gov (United States)

    Tranchevent, Léon-Charles; Aubé, Fabien; Dulaurier, Louis; Benoit-Pilven, Clara; Rey, Amandine; Poret, Arnaud; Chautard, Emilie; Mortada, Hussein; Desmet, François-Olivier; Chakrama, Fatima Zahra; Moreno-Garcia, Maira Alejandra; Goillot, Evelyne; Janczarski, Stéphane; Mortreux, Franck; Bourgeois, Cyril F; Auboeuf, Didier

    2017-06-01

    Transcriptomic genome-wide analyses demonstrate massive variation of alternative splicing in many physiological and pathological situations. One major challenge is now to establish the biological contribution of alternative splicing variation in physiological- or pathological-associated cellular phenotypes. Toward this end, we developed a computational approach, named "Exon Ontology," based on terms corresponding to well-characterized protein features organized in an ontology tree. Exon Ontology is conceptually similar to Gene Ontology-based approaches but focuses on exon-encoded protein features instead of gene level functional annotations. Exon Ontology describes the protein features encoded by a selected list of exons and looks for potential Exon Ontology term enrichment. By applying this strategy to exons that are differentially spliced between epithelial and mesenchymal cells and after extensive experimental validation, we demonstrate that Exon Ontology provides support to discover specific protein features regulated by alternative splicing. We also show that Exon Ontology helps to unravel biological processes that depend on suites of coregulated alternative exons, as we uncovered a role of epithelial cell-enriched splicing factors in the AKT signaling pathway and of mesenchymal cell-enriched splicing factors in driving splicing events impacting on autophagy. Freely available on the web, Exon Ontology is the first computational resource that allows getting a quick insight into the protein features encoded by alternative exons and investigating whether coregulated exons contain the same biological information. © 2017 Tranchevent et al.; Published by Cold Spring Harbor Laboratory Press.

  4. A Uniform Ontology for Software Interfaces

    Science.gov (United States)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at their disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance in the capabilities and use of the system.

  5. Phenex: ontological annotation of phenotypic diversity.

    Directory of Open Access Journals (Sweden)

    James P Balhoff

    2010-05-01

    Full Text Available Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge. Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices. Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.

  6. Phenex: ontological annotation of phenotypic diversity.

    Science.gov (United States)

    Balhoff, James P; Dahdul, Wasila M; Kothari, Cartik R; Lapp, Hilmar; Lundberg, John G; Mabee, Paula; Midford, Peter E; Westerfield, Monte; Vision, Todd J

    2010-05-05

    Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge. Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices. Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.

  7. Using ontology network structure in text mining.

    Science.gov (United States)

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
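
    As a hedged sketch of the graph-theoretic step described above (a toy graph, not the i2b2 data or the paper's weighting scheme), the example below computes PageRank and HITS scores over a few is-a links with networkx; the resulting scores are the kind of term-importance measures that can then be injected into a statistical text miner.

```python
# Hedged sketch: term importance over a toy ontology graph with networkx.
# Edges are illustrative is-a links, not the ontology used in the paper.
import networkx as nx

# Directed graph of ontology terms; an edge child -> parent means "is-a".
ontology = nx.DiGraph()
ontology.add_edges_from([
    ("cigarette_smoking", "tobacco_use"),
    ("cigar_smoking", "tobacco_use"),
    ("tobacco_use", "substance_use"),
    ("alcohol_use", "substance_use"),
])

pagerank_scores = nx.pagerank(ontology, alpha=0.85)
hub_scores, authority_scores = nx.hits(ontology)

# Terms that many other terms point to accumulate higher importance; these
# scores can be used to weight term frequencies in the text mining step.
for term, score in sorted(pagerank_scores.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {score:.3f}")
```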

  8. Comparative GO: a web application for comparative gene ontology and gene ontology-based gene selection in bacteria.

    Directory of Open Access Journals (Sweden)

    Mario Fruzangohar

    Full Text Available The primary means of classifying new functions for genes and proteins relies on Gene Ontology (GO), which defines genes/proteins using a controlled vocabulary in terms of their Molecular Function, Biological Process and Cellular Component. The challenge is to present this information to researchers so they can compare and discover patterns in multiple datasets using visually comprehensible and user-friendly statistical reports. Importantly, while there are many GO resources available for eukaryotes, none is suitable for simultaneous, graphical and statistical comparison between multiple datasets, and none provides comprehensive resources for bacteria. Using Streptococcus pneumoniae as a model, we identified and collected GO resources, including genes, proteins, taxonomy and GO relationships, from NCBI, UniProt and the GO organisations. We then designed database tables in a PostgreSQL database server and developed a Java application to extract data from the source files and load them into the database automatically. We developed a PHP web application based on a Model-View-Controller architecture, using a specific data structure as well as current and novel algorithms to estimate GO graph parameters. We designed different navigation and visualization methods on the graphs and integrated these into graphical reports. This tool is particularly significant when comparing GO groups between multiple samples (including those of pathogenic bacteria from different sources) simultaneously. Comparing GO protein distributions among up- or down-regulated genes from different samples can improve understanding of biological pathways and mechanism(s) of infection. It can also aid in the discovery of genes associated with specific function(s) for investigation as novel vaccine or therapeutic targets. http://turing.ersa.edu.au/BacteriaGO.
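
    As a hedged illustration of the kind of statistical comparison described above (not the tool's actual algorithms or data), the sketch below tests whether a GO category is over-represented among up-regulated genes using Fisher's exact test; the counts are invented.

```python
# Hedged sketch: testing over-representation of one GO category among
# up-regulated genes versus the rest of the genome. Counts are made up.
from scipy.stats import fisher_exact

# 2x2 contingency table:
#                      in GO category   not in category
# up-regulated genes          18                82
# all other genes            120              2880
table = [[18, 82],
         [120, 2880]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
```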

  9. Vaxjo: A Web-Based Vaccine Adjuvant Database and Its Application for Analysis of Vaccine Adjuvants and Their Uses in Vaccine Development

    Directory of Open Access Journals (Sweden)

    Samantha Sayers

    2012-01-01

    Full Text Available Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.

  10. Vaxjo: a web-based vaccine adjuvant database and its application for analysis of vaccine adjuvants and their uses in vaccine development.

    Science.gov (United States)

    Sayers, Samantha; Ulysse, Guerlain; Xiang, Zuoshuang; He, Yongqun

    2012-01-01

    Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.

  11. Standardized terminology for clinical trial protocols based on top-level ontological categories.

    Science.gov (United States)

    Heller, B; Herre, H; Lippoldt, K; Loeffler, M

    2004-01-01

    This paper describes a new method for the ontologically based standardization of concepts with regard to the quality assurance of clinical trial protocols. We developed a data dictionary for medical and trial-specific terms in which concepts and relations are defined context-dependently. The data dictionary is provided to different medical research networks by means of the software tool Onto-Builder via the internet. The data dictionary is based on domain-specific ontologies and the top-level ontology of GOL. The concepts and relations described in the data dictionary are represented in natural language, semi-formally or formally according to their use.

  12. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    Science.gov (United States)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete: they should find only what you really want and find everything you really want. Search engines (even meta search engines) lack semantics. Search is based simply on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query and ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query, reducing the burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes search results from different online resources, such as educational materials, publications, datasets, and web search engines, that might be of interest to the user.
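
    As a hedged sketch of ontology-guided query scoping (the Noesis ontologies and engine integration are not reproduced here), the example below expands a user's term with synonyms and narrower concepts from a tiny hand-made concept map before handing it to a keyword backend.

```python
# Hedged sketch of scoped search: expand a query term using a toy concept map
# standing in for a real domain ontology.
ATMOSPHERE_ONTOLOGY = {
    "pressure": {
        "synonyms": ["atmospheric pressure", "barometric pressure"],
        "specializations": ["sea level pressure", "geopotential height"],
    },
}

def scope_query(term: str, ontology: dict) -> str:
    """Expand a query term with synonyms and narrower concepts so that a
    plain keyword engine returns domain-relevant results only."""
    entry = ontology.get(term.lower())
    if entry is None:
        return term
    expansions = [term] + entry["synonyms"] + entry["specializations"]
    # Quote multi-word phrases and OR them together for the backend engine.
    return " OR ".join(f'"{phrase}"' for phrase in expansions)

print(scope_query("pressure", ATMOSPHERE_ONTOLOGY))
```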

  13. WebGIS based on semantic grid model and web services

    Science.gov (United States)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has some prominent problems in its development: for example, it cannot achieve interoperability of heterogeneous spatial databases, nor cross-platform data access. With the appearance of Web Services and Grid technology, the field of WebGIS has changed greatly. Web Services provide an interface that gives sites the ability to share data and communicate with each other. The goal of Grid technology is to turn the internet into one large supercomputer with which computing resources, storage resources, data resources, information resources, knowledge resources and expert resources can be shared efficiently. For WebGIS, however, this only achieves the physical connection of data and information, which is far from enough. Because of different understandings of the world, different professional regulations, policies and habits, experts in different fields reach different conclusions when they observe the same geographic phenomenon, and semantic heterogeneity arises; the same concept can therefore differ greatly between fields. If we build WebGIS without considering this semantic heterogeneity, user queries will be answered incorrectly or not at all. To solve this problem, this paper puts forward and evaluates an effective method of combining semantic grid and Web Services technology to develop WebGIS. In this paper, we studied how to construct ontologies and how to combine Grid technology and Web Services, and with a detailed analysis of the computing characteristics and application model of distributed data, we designed the WebGIS query system driven by

  14. An ontology-driven semantic mashup of gene and biological pathway information: application to the domain of nicotine dependence.

    Science.gov (United States)

    Sahoo, Satya S; Bodenreider, Olivier; Rutter, Joni L; Skinner, Karen J; Sheth, Amit P

    2008-10-01

    This paper illustrates how Semantic Web technologies (especially RDF, OWL, and SPARQL) can support information integration and make it easy to create semantic mashups (semantically integrated resources). In the context of understanding the genetic basis of nicotine dependence, we integrate gene and pathway information and show how three complex biological queries can be answered by the integrated knowledge base. We use an ontology-driven approach to integrate two gene resources (Entrez Gene and HomoloGene) and three pathway resources (KEGG, Reactome and BioCyc), for five organisms, including humans. We created the Entrez Knowledge Model (EKoM), an information model in OWL for the gene resources, and integrated it with the extant BioPAX ontology designed for pathway resources. The integrated schema is populated with data from the pathway resources, publicly available in BioPAX-compatible format, and gene resources for which a population procedure was created. The SPARQL query language is used to formulate queries over the integrated knowledge base to answer the three biological queries. Simple SPARQL queries could easily identify hub genes, i.e., those genes whose gene products participate in many pathways or interact with many other gene products. The identification of the genes expressed in the brain turned out to be more difficult, due to the lack of a common identification scheme for proteins. Semantic Web technologies provide a valid framework for information integration in the life sciences. Ontology-driven integration represents a flexible, sustainable and extensible solution to the integration of large volumes of information. Additional resources, which enable the creation of mappings between information sources, are required to compensate for heterogeneity across namespaces. RESOURCE PAGE: http://knoesis.wright.edu/research/lifesci/integration/structured_data/JBI-2008/
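
    The "hub gene" query pattern mentioned in the results can be illustrated with a small SPARQL sketch run over a toy rdflib graph. The ex: namespace and the participatesIn property are placeholders, not the EKoM or BioPAX terms actually used in the paper.

        # Sketch of a "hub gene" style SPARQL query over a toy graph (rdflib).
        # Namespace and property names are placeholders, not EKoM/BioPAX terms.
        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/genes#")
        g = Graph()
        # Toy data: gene products participating in pathways.
        g.add((EX.geneA, EX.participatesIn, EX.pathway1))
        g.add((EX.geneA, EX.participatesIn, EX.pathway2))
        g.add((EX.geneA, EX.participatesIn, EX.pathway3))
        g.add((EX.geneB, EX.participatesIn, EX.pathway1))

        query = """
        PREFIX ex: <http://example.org/genes#>
        SELECT ?gene (COUNT(?pathway) AS ?n)
        WHERE { ?gene ex:participatesIn ?pathway . }
        GROUP BY ?gene
        HAVING (COUNT(?pathway) >= 2)
        ORDER BY DESC(?n)
        """

        for gene, n in g.query(query):
            print(gene, n)  # genes in many pathways are "hub" candidates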

  15. SSDOnt: An Ontology for Representing Single-Subject Design Studies.

    Science.gov (United States)

    Berges, Idoia; Bermúdez, Jesus; Illarramendi, Arantza

    2018-02-01

    Single-subject designs are used in several areas such as education and biomedicine. However, no suitable formal vocabulary exists for annotating the detailed configuration and the results of this type of research study at the granularity needed when looking for information about them. Therefore, the search for those study designs relies heavily on syntactic search over the abstract, keywords or full text of the publications about the study, which entails some limitations. The objective of this work is to present SSDOnt, a special-purpose ontology for describing and annotating single-subject design studies, so that complex questions can be asked about them afterwards. The ontology was developed following the NeOn methodology. Once the requirements of the ontology were defined, a formal model was described in a description logic and later implemented in the ontology language OWL 2 DL. We show how the ontology provides a reference model with a suitable terminology for the annotation and searching of single-subject design studies and their main components, such as the phases, the intervention types, the outcomes and the results. Some mappings with terms of related ontologies have been established. As a proof of concept, we show that classes in the ontology can easily be extended to annotate more precise information about specific interventions and outcomes, such as those related to autism. Moreover, we provide examples of some types of queries that can be posed to the ontology. SSDOnt has achieved the purpose of covering the descriptions of the domain of single-subject research studies. Schattauer GmbH.
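
    To make the kind of question SSDOnt supports concrete, the sketch below looks for studies with a given phase design and intervention type; the ssd: namespace and the term names are invented placeholders, not SSDOnt's published IRIs.

        # Illustrative query in the spirit of SSDOnt-based search; all term names
        # are placeholders, not the ontology's real vocabulary.
        from rdflib import Graph, Namespace, RDF

        SSD = Namespace("http://example.org/ssdont#")
        g = Graph()
        g.add((SSD.study42, RDF.type, SSD.SingleSubjectStudy))
        g.add((SSD.study42, SSD.hasDesign, SSD.ABABDesign))
        g.add((SSD.study42, SSD.hasInterventionType, SSD.BehavioralIntervention))

        results = g.query("""
            PREFIX ssd: <http://example.org/ssdont#>
            SELECT ?study WHERE {
                ?study a ssd:SingleSubjectStudy ;
                       ssd:hasDesign ssd:ABABDesign ;
                       ssd:hasInterventionType ssd:BehavioralIntervention .
            }
        """)
        for (study,) in results:
            print(study)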

  16. An Ontology of Quality Initiatives and a Model for Decentralized, Collaborative Quality Management on the (Semantic) World Wide Web

    Science.gov (United States)

    2001-01-01

    This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture - i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. A first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look, and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be. PMID:11772549
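
    As a toy illustration of such machine-processable statements, the snippet below encodes one quality initiative accrediting another in RDF. The q: vocabulary is hypothetical and not part of any published rating schema.

        # Hypothetical RDF statements by/about quality initiatives (toy vocabulary).
        from rdflib import Graph, Namespace, RDF, Literal

        Q = Namespace("http://example.org/quality#")
        g = Graph()
        g.add((Q.initiativeA, RDF.type, Q.QualityInitiative))
        g.add((Q.initiativeB, RDF.type, Q.QualityInitiative))
        g.add((Q.initiativeA, Q.accredits, Q.initiativeB))      # statement about another
        g.add((Q.initiativeA, Q.selfDescription, Literal("Rates health websites")))
        print(g.serialize(format="turtle"))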

  17. ELLIPS: providing web-based language learning for Higher Education in the Netherlands

    NARCIS (Netherlands)

    Corda, A.; Jager, S.

    2004-01-01

    This paper presents the overall considerations and pedagogical approach that formed the basis for the development of an innovative web-based CALL application, Ellips (Electronic Language Learning Interactive Practising System). It describes the program’s most salient features, illustrating in

  18. Populating the Semantic Web by Macro-reading Internet Text

    Science.gov (United States)

    Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard

    A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.

  19. GOPET: A tool for automated predictions of Gene Ontology terms

    Directory of Open Access Journals (Sweden)

    Glatting Karl-Heinz

    2006-03-01

    Background: Vast progress in sequencing projects has called for annotation on a large scale. A number of methods have been developed to address this challenging task. These methods, however, either apply to specific subsets, or their predictions are not formalised, or they do not provide precise confidence values for their predictions. Description: We recently established a learning system for automated annotation, trained with a broad variety of different organisms, to predict standardised annotation terms from the Gene Ontology (GO). This method has now been made available to the public via our web service GOPET (Gene Ontology term Prediction and Evaluation Tool). It supplies annotation for sequences of any organism, and for each predicted term an appropriate confidence value is provided. The basic method had been developed for predicting molecular function GO terms; it has now been expanded to predict biological process terms as well. The web service is available via http://genius.embnet.dkfz-heidelberg.de/menu/biounit/open-husar Conclusion: Our web service gives experimental researchers as well as the bioinformatics community a valuable sequence annotation device. Additionally, GOPET also provides less significant annotation data which may serve as an extended discovery platform for the user.

  20. The Ontology Lookup Service, a lightweight cross-platform tool for controlled vocabulary queries

    Directory of Open Access Journals (Sweden)

    Apweiler Rolf

    2006-02-01

    Background: With the vast amounts of biomedical data being generated by high-throughput analysis methods, controlled vocabularies and ontologies are becoming increasingly important for annotating units of information for ease of search and retrieval. Each scientific community tends to create its own locally available ontology, and the interfaces to query these ontologies tend to vary from group to group. We saw the need for a centralized location to perform controlled vocabulary queries that would offer both a lightweight web-accessible user interface and a consistent, unified SOAP interface for automated queries. Results: The Ontology Lookup Service (OLS) was created to integrate publicly available biomedical ontologies into a single database. All modified ontologies are updated daily, and a list of currently loaded ontologies is available online. The database can be queried to obtain information on a single term or to browse a complete ontology using AJAX. Auto-completion provides a user-friendly search mechanism, and an AJAX-based ontology viewer is available to browse a complete ontology or subsets of it. A programmatic interface is available to query the web service using SOAP. The service is described by a WSDL descriptor file available online, and a sample Java client to connect to the web service using SOAP is available for download from SourceForge. All OLS source code is publicly available under the open source Apache Licence. Conclusion: The OLS provides a user-friendly single entry point for publicly available ontologies in the Open Biomedical Ontology (OBO) format. It can be accessed interactively or programmatically at http://www.ebi.ac.uk/ontology-lookup/.

  1. Physical properties of biological entities: an introduction to the ontology of physics for biology.

    Directory of Open Access Journals (Sweden)

    Daniel L Cook

    As biomedical investigators strive to integrate data and analyses across spatiotemporal scales and biomedical domains, they have recognized the benefits of formalizing languages and terminologies via computational ontologies. Although ontologies for biological entities (molecules, cells, organs) are well established, there are no principled ontologies of the physical properties (energies, volumes, flow rates) of those entities. In this paper, we introduce the Ontology of Physics for Biology (OPB), a reference ontology of classical physics designed for annotating biophysical content of growing repositories of biomedical datasets and analytical models. The OPB's semantic framework, traceable to James Clerk Maxwell, encompasses modern theories of system dynamics and thermodynamics, and is implemented as a computational ontology that references available upper ontologies. In this paper we focus on the OPB classes that are designed for annotating physical properties encoded in biomedical datasets and computational models, and we discuss how the OPB framework will facilitate biomedical knowledge integration.

  2. Physical properties of biological entities: an introduction to the ontology of physics for biology.

    Science.gov (United States)

    Cook, Daniel L; Bookstein, Fred L; Gennari, John H

    2011-01-01

    As biomedical investigators strive to integrate data and analyses across spatiotemporal scales and biomedical domains, they have recognized the benefits of formalizing languages and terminologies via computational ontologies. Although ontologies for biological entities (molecules, cells, organs) are well established, there are no principled ontologies of the physical properties (energies, volumes, flow rates) of those entities. In this paper, we introduce the Ontology of Physics for Biology (OPB), a reference ontology of classical physics designed for annotating biophysical content of growing repositories of biomedical datasets and analytical models. The OPB's semantic framework, traceable to James Clerk Maxwell, encompasses modern theories of system dynamics and thermodynamics, and is implemented as a computational ontology that references available upper ontologies. In this paper we focus on the OPB classes that are designed for annotating physical properties encoded in biomedical datasets and computational models, and we discuss how the OPB framework will facilitate biomedical knowledge integration. © 2011 Cook et al.

  3. Querying archetype-based EHRs by search ontology-based XPath engineering.

    Science.gov (United States)

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. This query specification method is introduced in practice and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by the use of search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep syntax knowledge of XPath.
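
    A minimal illustration of why scoping a query to specific parts of an XML document helps is sketched below with lxml. The EHR structure and element names are invented for this example and do not reflect the archetypes or search ontology used in the study.

        # Free-text search over the whole document would also match the <note>;
        # the XPath below scopes the search to coded observations only.
        from lxml import etree

        ehr_xml = b"""
        <ehr>
          <observation>
            <code>blood_pressure</code>
            <value unit="mmHg">142/90</value>
          </observation>
          <note>Patient reports no pressure at work.</note>
        </ehr>
        """

        doc = etree.fromstring(ehr_xml)
        values = doc.xpath("//observation[code='blood_pressure']/value/text()")
        print(values)  # ['142/90']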

  4. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    Science.gov (United States)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

    While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that bring together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on a web service oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as the Web Feature Service (WFS) and Web Map Service (WMS). Atlases publish Catalogue Services for the Web (CSW) using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate mapping between terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal
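
    One common way to encode such term linkages is with SKOS mapping properties; the sketch below shows the general idea using placeholder URIs rather than the actual Marine Irish Digital Atlas or Oregon Coastal Atlas identifiers.

        # Linking equivalent coastal-erosion terms from two atlases with SKOS.
        # The term URIs are placeholders, not the atlases' real identifiers.
        from rdflib import Graph, Namespace
        from rdflib.namespace import SKOS

        MIDA = Namespace("http://example.org/mida/terms#")
        OCA = Namespace("http://example.org/oca/terms#")

        g = Graph()
        g.bind("skos", SKOS)
        g.add((MIDA.CoastalErosion, SKOS.exactMatch, OCA.ShorelineErosion))
        g.add((MIDA.DuneRetreat, SKOS.closeMatch, OCA.DuneRecession))
        print(g.serialize(format="turtle"))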

  5. OntoCAT--simple ontology search and integration in Java, R and REST/JavaScript.

    Science.gov (United States)

    Adamusiak, Tomasz; Burdett, Tony; Kurbatova, Natalja; Joeri van der Velde, K; Abeygunawardena, Niran; Antonakaki, Despoina; Kapushesky, Misha; Parkinson, Helen; Swertz, Morris A

    2011-05-29

    Ontologies have become an essential asset in the bioinformatics toolbox, and a number of ontology access resources are now available, for example the EBI Ontology Lookup Service (OLS) and the NCBO BioPortal. However, these resources differ substantially in mode, ease of access, and ontology content. This makes it relatively difficult to access each ontology source separately and map its contents to research data, and much of this effort is replicated across different research groups. OntoCAT provides a seamless programming interface to query heterogeneous ontology resources including OLS and BioPortal, as well as user-specified local OWL and OBO files. Each resource is wrapped behind easy-to-learn Java, Bioconductor/R and REST web service commands, enabling reuse and integration of ontology software efforts despite variation in technologies. It is also available as a stand-alone MOLGENIS database and a Google App Engine application. OntoCAT provides a robust, configurable solution for accessing ontology terms specified locally and from remote services, is available as a stand-alone tool, and has been tested thoroughly in the ArrayExpress, MOLGENIS, EFO and Gen2Phen phenotype use cases. http://www.ontocat.org.
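
    The underlying design idea, hiding heterogeneous ontology sources behind one small interface, can be sketched as follows. This is a generic Python illustration of the pattern, not OntoCAT's actual Java/R API, and both backends are stubs.

        # Generic "one interface, many ontology sources" pattern (not OntoCAT's API).
        from abc import ABC, abstractmethod

        class OntologyService(ABC):
            @abstractmethod
            def search_term(self, label: str) -> list[str]:
                """Return identifiers of terms matching a label."""

        class LocalOboFileService(OntologyService):
            def __init__(self, terms: dict[str, str]):
                self.terms = terms  # label -> term id, e.g. parsed from an OBO file

            def search_term(self, label: str) -> list[str]:
                return [tid for lbl, tid in self.terms.items() if label.lower() in lbl.lower()]

        class RemoteRepositoryService(OntologyService):
            def search_term(self, label: str) -> list[str]:
                return []  # a real implementation would call a web service here

        def search_everywhere(label: str, services: list[OntologyService]) -> list[str]:
            hits: list[str] = []
            for service in services:
                hits.extend(service.search_term(label))
            return hits

        print(search_everywhere("thyroid", [LocalOboFileService({"thyroid gland": "EX:0001"})]))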

  6. Using concept similarity in cross ontology for adaptive e-Learning systems

    Directory of Open Access Journals (Sweden)

    B. Saleena

    2015-01-01

    e-Learning is one of the most preferred media of learning. Learners search the web to gather knowledge about a particular topic from the information in its repositories. Retrieval of relevant materials from a domain can be implemented easily if the information is organized and related in some way, and ontologies are a key concept that helps relate information so that more relevant lessons can be provided to the learner. This paper proposes an adaptive e-Learning system that generates user-specific e-Learning content by comparing concepts across more than one system using similarity measures. A cross-ontology measure is defined, which uses a fuzzy domain ontology as the primary ontology and the domain expert’s ontology as the secondary ontology for the comparison process. A personalized document is provided to the user together with a user profile, which includes the data obtained from the proposed method under a user score obtained through user evaluation. The results of the proposed e-Learning system under the designed cross-ontology similarity measure show a significant increase in performance and accuracy under different conditions. A comparative assessment showed the difference in performance between our proposed method and other methods, and based on the assessment results the proposed approach proves effective compared with those methods.

  7. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the growth and elucidation of the concepts of an information domain that are common to a group of users. Introducing ontology into information retrieval is a common way to improve the retrieval of the relevant information users require. Matching keywords against a historical or information domain is significant in recent approaches for finding the best match for specific input queries. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is transformed into a first-order predicate logic form, which is used to route the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence; in text matching, it is more reliable to study the semantics of the model and the query under the conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are subjected to semantic matching to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the relevant documents when compared to standard ontology.

  8. Ontological foundations for evolutionary economics: A Darwinian social ontology

    NARCIS (Netherlands)

    Stoelhorst, J.W.

    2008-01-01

    The purpose of this paper is to further the project of generalized Darwinism by developing a social ontology on the basis of a combined commitment to ontological continuity and ontological commonality. Three issues that are central to the development of a social ontology are addressed: (1) the

  9. Integrating systems biology models and biomedical ontologies.

    Science.gov (United States)

    Hoehndorf, Robert; Dumontier, Michel; Gennari, John H; Wimalaratne, Sarala; de Bono, Bernard; Cook, Daniel L; Gkoutos, Georgios V

    2011-08-11

    Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms.
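
    A very small sketch of the kind of transformation described, turning an annotated model element into RDF/OWL assertions that link it to an ontology term, is shown below. The URIs, property names and the toy species are illustrative; the Harvester's actual output schema is richer.

        # Toy conversion of one annotated model species into RDF links (rdflib).
        # The ex: vocabulary is invented; GO:0006096 is the glycolysis term.
        from rdflib import Graph, Namespace, RDF, Literal
        from rdflib.namespace import RDFS

        EX = Namespace("http://example.org/model#")
        GO = Namespace("http://purl.obolibrary.org/obo/GO_")

        g = Graph()
        species = EX.glucose_6_phosphate                 # an SBML species from a toy model
        g.add((species, RDF.type, EX.ModelSpecies))
        g.add((species, EX.representsParticipantIn, GO["0006096"]))
        g.add((GO["0006096"], RDFS.label, Literal("glycolytic process")))
        print(g.serialize(format="turtle"))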

  10. An ontology-driven semantic mash-up of gene and biological pathway information: Application to the domain of nicotine dependence

    Science.gov (United States)

    Sahoo, Satya S.; Bodenreider, Olivier; Rutter, Joni L.; Skinner, Karen J.; Sheth, Amit P.

    2008-01-01

    Objectives This paper illustrates how Semantic Web technologies (especially RDF, OWL, and SPARQL) can support information integration and make it easy to create semantic mashups (semantically integrated resources). In the context of understanding the genetic basis of nicotine dependence, we integrate gene and pathway information and show how three complex biological queries can be answered by the integrated knowledge base. Methods We use an ontology-driven approach to integrate two gene resources (Entrez Gene and HomoloGene) and three pathway resources (KEGG, Reactome and BioCyc), for five organisms, including humans. We created the Entrez Knowledge Model (EKoM), an information model in OWL for the gene resources, and integrated it with the extant BioPAX ontology designed for pathway resources. The integrated schema is populated with data from the pathway resources, publicly available in BioPAX-compatible format, and gene resources for which a population procedure was created. The SPARQL query language is used to formulate queries over the integrated knowledge base to answer the three biological queries. Results Simple SPARQL queries could easily identify hub genes, i.e., those genes whose gene products participate in many pathways or interact with many other gene products. The identification of the genes expressed in the brain turned out to be more difficult, due to the lack of a common identification scheme for proteins. Conclusion Semantic Web technologies provide a valid framework for information integration in the life sciences. Ontology-driven integration represents a flexible, sustainable and extensible solution to the integration of large volumes of information. Additional resources, which enable the creation of mappings between information sources, are required to compensate for heterogeneity across namespaces. Resource page http://knoesis.wright.edu/research/lifesci/integration/structured_data/JBI-2008/ PMID:18395495

  11. MIRO: guidelines for minimum information for the reporting of an ontology.

    Science.gov (United States)

    Matentzoglu, Nicolas; Malone, James; Mungall, Chris; Stevens, Robert

    2018-01-18

    Creation and use of ontologies has become a mainstream activity in many disciplines, in particular, the biomedical domain. Ontology developers often disseminate information about these ontologies in peer-reviewed ontology description reports. There appears to be, however, a high degree of variability in the content of these reports. Often, important details are omitted such that it is difficult to gain a sufficient understanding of the ontology, its content and method of creation. We propose the Minimum Information for Reporting an Ontology (MIRO) guidelines as a means to facilitate a higher degree of completeness and consistency between ontology documentation, including published papers, and ultimately a higher standard of report quality. A draft of the MIRO guidelines was circulated for public comment in the form of a questionnaire, and we subsequently collected 110 responses from ontology authors, developers, users and reviewers. We report on the feedback of this consultation, including comments on each guideline, and present our analysis on the relative importance of each MIRO information item. These results were used to update the MIRO guidelines, mainly by providing more detailed operational definitions of the individual items and assigning degrees of importance. Based on our revised version of MIRO, we conducted a review of 15 recently published ontology description reports from three important journals in the Semantic Web and Biomedical domain and analysed them for compliance with the MIRO guidelines. We found that only 41.38% of the information items were covered by the majority of the papers (and deemed important by the survey respondents) and a large number of important items are not covered at all, like those related to testing and versioning policies. We believe that the community-reviewed MIRO guidelines can contribute to improving significantly the quality of ontology description reports and other documentation, in particular by increasing consistent

  12. IBRI-CASONTO: Ontology-based semantic search engine

    Directory of Open Access Journals (Sweden)

    Awny Sayed

    2017-11-01

    The vast amount of information being added at a very fast pace to data repositories creates a challenge in extracting correct and accurate information, and has increased the competition among developers to gain access to technology that seeks to understand the researcher's intent and the contextual meaning of terms. Arabic semantic search systems are still in their infancy, which can be traced back to the complexity of the Arabic language: it has complex morphological, grammatical and semantic aspects, as it is a highly inflectional and derivational language. In this paper, we highlight and present an ontological search engine called IBRI-CASONTO for the Colleges of Applied Sciences, Oman. Our proposed engine supports both the Arabic and English languages and employs two types of search: a keyword-based search and a semantics-based search. IBRI-CASONTO is based on technologies such as Resource Description Framework (RDF) data and an ontological graph. The experiments are presented in two sections: first, a comparison between entity search and classical search inside IBRI-CASONTO itself; second, a comparison of IBRI-CASONTO's entity search with currently used search engines, such as Kngine, Wolfram Alpha and the most popular engine nowadays, Google, in order to measure their performance and efficiency.

  13. Does the Test Work? Evaluating a Web-Based Language Placement Test

    Science.gov (United States)

    Long, Avizia Y.; Shin, Sun-Young; Geeslin, Kimberly; Willis, Erik W.

    2018-01-01

    In response to the need for examples of test validation from which everyday language programs can benefit, this paper reports on a study that used Bachman's (2005) assessment use argument (AUA) framework to examine evidence to support claims made about the intended interpretations and uses of scores based on a new web-based Spanish language…

  14. Ontology-Based Information Visualization: Toward Semantic Web Applications

    NARCIS (Netherlands)

    Fluit, Christiaan; Sabou, Marta; Harmelen, Frank van

    2006-01-01

    The Semantic Web is an extension of the current World Wide Web, based on the idea of exchanging information with explicit, formal, and machine-accessible descriptions of meaning. Providing information with such semantics will enable the construction of applications that have an increased awareness

  15. VOILA 2015 Visualizations and User Interfaces for Ontologies and Linked Data : Proceedings of the International Workshop on Visualizations and User Interfaces for Ontologies and Linked Data

    OpenAIRE

    2015-01-01

    A picture is worth a thousand words, we often say, yet many areas are in demand of sophisticated visualization techniques, and the Semantic Web is not an exception. The size and complexity of ontologies and Linked Data in the Semantic Web constantly grow and the diverse backgrounds of the users and application areas multiply at the same time. Providing users with visual representations and intuitive user interfaces can significantly aid the understanding of the domains and knowledge represent...

  16. Supporting collaboration in interdisciplinary research of water–energy–food nexus by means of ontology engineering

    Directory of Open Access Journals (Sweden)

    Terukazu Kumazawa

    2017-06-01

    Introducing an ontology engineering approach will enable researchers to share a common language and a common theoretical basis, but new methods based on ontology engineering still need to be developed. For example, structuring knowledge according to each researcher's perspective, and producing simple figures accompanied by a reasoned argument in the background, are directions for tool development.

  17. Social networking for language learners: Creating meaningful output with Web 2.0 tools

    Directory of Open Access Journals (Sweden)

    Robert Chartrand

    2012-03-01

    The Internet has the potential to provide language learners with vast resources of authentic written, audio, and video materials to supplement lessons. Educators can find a wide assortment of materials for learners to study in class or after class for independent learning and to encourage learner autonomy. More recently, however, the immense popularity of social networking websites has created new opportunities for language learners to interact in authentic ways that were previously difficult to achieve. Advances in technology mean that today, learners of a language can easily interact with their peers in meaningful practice that helps foster language acquisition and motivation. That is, tasks that make use of Web 2.0 interactivity can significantly raise students’ potential to generate meaningful output and stimulate their interest in language learning.

  18. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes are done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…

  19. A literature-based approach to annotation and browsing of Web resources

    Directory of Open Access Journals (Sweden)

    Miguel A. Sicilia

    2003-01-01

    The emerging Semantic Web technologies critically depend on the availability of shared knowledge representations called ontologies, which are intended to encode consensual knowledge about specific domains. Currently, the proposed processes for building and maintaining those ontologies entail the joint effort of groups of representative domain experts, which can be expensive in terms of coordination and in terms of time to reach consensus. In this paper, literature-based ontologies, which can be initially developed by a single expert and maintained continuously, are proposed as preliminary alternatives to group-generated domain ontologies, or as early versions of them. These ontologies encode domain knowledge in the form of terms and relations along with the (formal or informal) bibliographical resources that define or deal with them, which makes them especially useful for domains in which a common terminology or jargon is not soundly established. A general-purpose metamodelling framework for literature-based ontologies, which has been used in two concrete domains, is described, along with a proposed methodology and a specific resource annotation approach. In addition, the implementation of an RDF-based Web resource browser, which uses the ontologies to guide the user in the exploration of a corpus of digital resources, is presented as a proof of concept.

  20. Ontology Translation: The Semiotic Engineering of Content Management Systems

    Directory of Open Access Journals (Sweden)

    Alejandro Villamarin M.

    2015-12-01

    The present paper proposes the application of Semiotic Engineering theory to Content Management Systems (CMS), focusing on the analysis of how the use of different ontologies can affect users' efficiency when performing tasks in a CMS. The analysis is performed using the theoretical semiotic model Web-Semiotic Interface Design Evaluation (W-SIDE).

  1. Shifting ontologies of a serious game and its relationships with English education for beginners

    DEFF Research Database (Denmark)

    Hansbøl, Mikala; Meyer, Bente

    2011-01-01

    This paper takes its point of departure in a language project, which is a subproject under the larger ongoing (2007-2011) research project Serious Games on a Global Market Place. The language project follows how the virtual universe known as Mingoville ( http://www.mingoville.com/ ) becomes an actor in English education for beginners. The virtual universe provides an online environment for students beginning to learn English in schools and at home. This paper focuses on the shifting ontologies of Mingoville and on teaching and learning situations in beginners' English. Keywords: entanglement approach, relational ontology, serious games, teaching and learning English for beginners, educational technology research.

  2. Taxonomy, ontology and folksonomy: what are they, and what benefits or opportunities do they present for web users?

    Directory of Open Access Journals (Sweden)

    Flor Nancy Díaz Piraquive

    2009-05-01

    Many people, public entities and especially private entities are trying to get the most out of their information and communication technology infrastructure. This technology is acquired not only as a tool for carrying out the processes and activities of their daily tasks but also as an opportunity to build knowledge by means of collaborative learning. This article briefly describes how topics related to taxonomy, ontology and folksonomy contribute to the generation of new knowledge, addressing questions such as: what are they? who uses them? what benefits do they bring, and what opportunities do they offer to users of the web? Some important considerations on taxonomies show how they have gone from being the science that deals with the principles, methods and purposes of classification to becoming the technology used for efficient management of information and content; taxonomy is an essential element in building knowledge within organizations. Regarding ontologies, we show how they can be used to define vocabularies that can be understood and specified by computers with enough precision to distinguish terms and reference them precisely, thus making web search easier and optimizing users' resources. Lastly, regarding folksonomies, we show that they are a way of taking advantage of people's knowledge in an organic and democratic manner, by organizing and classifying the information that travels through the Internet in a collaborative environment, through agreements that lead to the achievement of a common goal. This article is aimed at those interested in current topics such as taxonomies, ontologies and folksonomies.

  3. The Digital Exhibition and Keyimage Ontology

    OpenAIRE

    Airchinnigh, Mícheál Mac an; Sotirova, Kalina

    2008-01-01

    The Age of Image predates and is currently contemporaneous with the Information Age. In our times the explosive expansion of Web 2.0 Social Space, typified by the phenomena of De.licio.us, Flickr, MySpace, YouTube…, and the concomitant emergence of folksonomy, present interesting challenges in the management of this information. One key process by which to accomplish this in Social Space is the wedding of folksonomy (of the people) with ontology (of the machine). Such a wedding must necessar...

  4. Semantics of immersive web through its architectural structure and graphic primitives

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2010-12-01

    Full Text Available Currently, practices and tools for computer-aided three-dimensional design, do not allow the semantic description of objects constructed in some cases specified notations as handling layers, or labeling of each development itself. The lack of a standard for the description of the elements represents a major drawback for using advanced three-dimensional environments such as the automation of search and construction processes that require semantic knowledge of its elements.This project proposes the development the semantic composition from the hierarchy of three-dimensional visualization of graphics primitives used to construct three-dimensional objects, taking into account the geometric composition architecture of standard 19775-1 of the International Electrotechnical Commission of the International Organization for StandardizationFor the development of semantic composition use the methodology methontology proposed by the Universidad Politécnica de Madrid, because it allows the construction of ontologies about specific domains, limiting the domain by defining classes and subclasses, relationships and the generation of instances a framework for resource description on web ontology language.

  5. A Case for Embedded Natural Logic for Ontological Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Nilsson, Jørgen Fischer

    2014-01-01

    We argue in favour of adopting a form of natural logic for ontology-structured knowledge bases as an alternative to description logic and rule-based languages. Natural logic is a form of logic resembling natural language assertions, unlike description logic. This is essential, e.g., in the life sciences... negation in description logic. We embed the natural logic in DATALOG clauses, which take care of the computational inference in connection with querying...

  6. An Ontology as a Tool for Representing Fuzzy Data in Relational Databases

    Directory of Open Access Journals (Sweden)

    Carmen Martinez-Cruz

    2012-11-01

    Several applications for representing classical or fuzzy data in databases have been developed in the last two decades. However, these representations present some limitations, especially related to system portability and complexity. Ontologies provide a mechanism for representing data in an implementation-independent and web-accessible way. To take advantage of this, in this paper an ontology that represents the fuzzy relational database model has been redefined to allow users or applications to communicate with fuzzy data stored in fuzzy databases. The communication channel established between the ontology and any Relational Database Management System (RDBMS) is analysed in depth throughout the text to justify some of the advantages of the system: expressiveness, portability and platform heterogeneity. Moreover, some tools have been developed to define and manage fuzzy and classical data in relational databases using this ontology. An application that performs fuzzy queries using the same technology is also included in this proposal, together with some examples using real databases.

  7. Business Ontology for Evaluating Corporate Social Responsibility

    OpenAIRE

    Ion Smeureanu; Andreea Dioşteanu; Camelia Delcea; Liviu Cotfas

    2011-01-01

    This paper presents a software solution that is developed to automatically classify companies by taking into account their level of social responsibility. The application is based on ontologies and on intelligent agents. In order to obtain the data needed to evaluate companies, we developed a web crawling module that analyzes the company’s website and the documents that are available online such as social responsibility report, mission statement, employment structure, etc. Based on a predefin...

  8. Quantum ontologies

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1988-12-01

    Quantum ontologies are conceptions of the constitution of the universe that are compatible with quantum theory. The ontological orientation is contrasted to the pragmatic orientation of science, and reasons are given for considering quantum ontologies both within science, and in broader contexts. The principal quantum ontologies are described and evaluated. Invited paper at conference: Bell's Theorem, Quantum Theory, and Conceptions of the Universe, George Mason University, October 20-21, 1988. 16 refs

  9. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.

  10. Comparison of reasoners for large ontologies in the OWL 2 EL profile

    NARCIS (Netherlands)

    Dentler, K.; Cornet, R.; ten Teije, A.C.M.; de Keizer, N.F.

    2011-01-01

    This paper provides a survey to and a comparison of state-of-the-art Semantic Web reasoners that succeed in classifying large ontologies expressed in the tractable OWL 2 EL profile. Reasoners are characterized along several dimensions: The first dimension comprises underlying reasoning
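
    As a concrete illustration of classifying an ontology from code, the sketch below uses owlready2 with its bundled HermiT reasoner rather than one of the EL-specific reasoners compared in the survey; the ontology path is a placeholder.

        # Classify an ontology and inspect the inferred hierarchy (owlready2).
        # Uses the bundled HermiT reasoner, not an OWL 2 EL-specific reasoner.
        from owlready2 import get_ontology, sync_reasoner

        onto = get_ontology("file://./large_el_ontology.owl").load()  # placeholder path
        with onto:
            sync_reasoner()  # computes the inferred class hierarchy

        for cls in list(onto.classes())[:10]:
            print(cls, "=>", cls.is_a)  # asserted plus inferred superclasses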

  11. There is no quantum ontology without classical ontology

    Energy Technology Data Exchange (ETDEWEB)

    Fink, Helmut [Institut fuer Theoretische Physik, Univ. Erlangen-Nuernberg (Germany)

    2011-07-01

    The relation between quantum physics and classical physics is still under debate. In his recent book "Rational Reconstructions of Modern Physics", Peter Mittelstaedt explores a route from classical to quantum mechanics by reduction and elimination of (some of) the ontological hypotheses underlying classical mechanics. While, according to Mittelstaedt, classical mechanics describes a fictitious world that does not exist in reality, he claims to achieve a universal quantum ontology that can be improved by incorporating unsharp properties and equipped with Planck's constant without any need to refer to classical concepts. In this talk, we argue that quantum ontology in Mittelstaedt's sense is not enough. Quantum ontology can never be universal as long as the difference between potential and real properties is not represented adequately. Quantum properties are potential, not (yet) real, be they sharp or unsharp. Hence, preparation and measurement presuppose classical concepts, even in quantum theory. We end up with a classical-quantum sandwich ontology, which is still less extravagant than Bohmian or many-worlds ontologies are.

  12. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    Science.gov (United States)

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  13. Effects of Web-Mediated Teacher Professional Development on the Language and Literacy Skills of Children Enrolled in Pre-Kindergarten Programs

    Science.gov (United States)

    Downer, Jason; Pianta, Robert; Fan, Xitao; Hamre, Bridget; Mashburn, Andrew; Justice, Laura

    2012-01-01

    As early education grows in the United States, in-service professional development in key instructional and interaction skills is a core component of capacity-building in early childhood education. In this paper, we describe results from an evaluation of the effects of MyTeachingPartner, a web-based system of professional development, on language and literacy development during pre-kindergarten for 1338 children in 161 teachers’ classrooms. High levels of support for teachers’ implementation of language/literacy activities showed modest but significant effects for improving early language and literacy for children in classrooms in which English was the dominant language spoken by the students and teachers. The combination of web-based supports, including video-based consultation and web-based video teaching exemplars, was more effective at improving children’s literacy and language skills than was only making available to teachers a set of instructional materials and detailed lesson guides. These results suggest the importance of targeted, practice-focused supports for teachers in designing professional development systems for effective teaching in early childhood programs. PMID:23144591

  14. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    Science.gov (United States)

    Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter

    2014-05-01

    This poster will show how we used a case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
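
    The flavour of such a provenance trace can be sketched with the standard PROV-O terms; the resource URIs and the figure and dataset names below are invented, and only the prov: vocabulary is the W3C one.

        # Minimal PROV-O provenance trace for a report figure (rdflib).
        # Resource URIs are invented; the prov: terms are standard PROV-O.
        from rdflib import Graph, Namespace, RDF, Literal
        from rdflib.namespace import RDFS

        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("http://example.org/nca#")

        g = Graph()
        g.bind("prov", PROV)
        g.add((EX.figure_2_1, RDF.type, PROV.Entity))
        g.add((EX.temperature_dataset, RDF.type, PROV.Entity))
        g.add((EX.usgcrp, RDF.type, PROV.Agent))
        g.add((EX.figure_2_1, PROV.wasDerivedFrom, EX.temperature_dataset))
        g.add((EX.figure_2_1, PROV.wasAttributedTo, EX.usgcrp))
        g.add((EX.figure_2_1, RDFS.label, Literal("Example report figure")))
        print(g.serialize(format="turtle"))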

  15. A Multivariate Analysis of Secondary Students' Experience of Web-Based Language Acquisition

    Science.gov (United States)

    Felix, Uschi

    2004-01-01

    This paper reports on a large-scale project designed to replicate an earlier investigation of tertiary students (Felix, 2001) in a secondary school environment. The new project was carried out in five settings, again investigating the potential of the Web as a medium of language instruction. Data was collected by questionnaires and observational…

  16. Discover, Reuse and Share Knowledge on Service Oriented Architectures

    Directory of Open Access Journals (Sweden)

    Jesus Soto Carrion

    2011-12-01

    Full Text Available Current Semantic Web frameworks provide a complete infrastructure to manage ontologies schemes easing information retrieval with inference support. Ideally, the use of their frameworks should be transparent and decoupled, avoiding direct dependencies either on the application logic or on the ontology language. Besides there are different logic models used by ontology languages (OWL- Description Logic, OpenCyc-FOL,... and query languages (RDQL, SPARQL, OWLQL, nRQL, etc... These facts show integration and interoperability tasks between ontologies and applications are tedious on currently systems. This research provides a general ESB service engine design based on JBI that enables ontology query and reasoning capabilities thought an Enterprise Service Bus. An early prototype that shows how works our research ideas has been developed.

  17. Feature-based Ontology Mapping from an Information Receivers’ Viewpoint

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Mørup, Morten

    2012-01-01

    This paper compares four algorithms for computing feature-based similarities between concepts respectively possessing a distinctive set of features. The eventual purpose of comparing these feature-based similarity algorithms is to identify a candidate term in a Target Language (TL) that can optimally convey the original meaning of a culturally-specific Source Language (SL) concept to a TL audience by aligning two culturally-dependent domain-specific ontologies. The results indicate that the Bayesian Model of Generalization [1] performs best, not only for identifying candidate translation terms...
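
    A simpler stand-in for such feature-based comparisons is Tversky's ratio model over two concepts' feature sets, sketched below; this is not the Bayesian Model of Generalization evaluated in the paper, and the feature sets are invented.

        # Tversky's ratio model as a simple feature-based similarity (illustrative).
        def tversky(a: set[str], b: set[str], alpha: float = 0.5, beta: float = 0.5) -> float:
            common = len(a & b)
            return common / (common + alpha * len(a - b) + beta * len(b - a))

        source_concept = {"fermented", "dairy", "breakfast", "sour"}   # SL concept features
        candidate_1 = {"fermented", "dairy", "sour", "drinkable"}      # TL candidate A
        candidate_2 = {"dairy", "dessert", "sweet"}                    # TL candidate B

        print(tversky(source_concept, candidate_1))  # higher overlap -> better candidate
        print(tversky(source_concept, candidate_2))  # lower overlap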

  18. Constructing Ontology for Knowledge Sharing of Materials Failure Analysis

    Directory of Open Access Journals (Sweden)

    Peng Shi

    2014-01-01

    Materials failure refers to faults in materials or components during their service. To avoid the recurrence of similar failures, materials failure analysis is carried out to investigate the reasons for the failure and to propose improved strategies. The whole procedure requires substantial domain knowledge and also produces valuable new knowledge. However, information about a materials failure analysis is usually retained by the domain expert, and sharing it is technically difficult. This can seriously reduce the efficiency and decrease the accuracy of failure analysis. To solve this problem, this paper adopts ontology, a technology from the Semantic Web, as a tool for knowledge representation and sharing, and describes the construction of an ontology capturing information about the failure analysis, application area, materials, and failure cases. Ontology-represented information is machine-understandable and can easily be shared through the Internet. At the same time, intelligent retrieval of failure cases, advanced statistics, and even automatic reasoning can be carried out based on ontology-represented knowledge. This can clearly promote the sharing of knowledge about the safe service of materials and improve the efficiency of failure analysis. A case from the nuclear power plant area is presented to show the details and benefits of this method.

  19. An ontological approach to identifying cases of chronic kidney disease from routine primary care data: a cross-sectional study.

    Science.gov (United States)

    Cole, Nicholas I; Liyanage, Harshana; Suckling, Rebecca J; Swift, Pauline A; Gallagher, Hugh; Byford, Rachel; Williams, John; Kumar, Shankar; de Lusignan, Simon

    2018-04-10

    Accurately identifying cases of chronic kidney disease (CKD) from primary care data facilitates the management of patients, and is vital for surveillance and research purposes. Ontologies provide a systematic and transparent basis for clinical case definition and can be used to identify clinical codes relevant to all aspects of CKD care and its diagnosis. We used routinely collected primary care data from the Royal College of General Practitioners Research and Surveillance Centre. A domain ontology was created and presented in Ontology Web Language (OWL). The identification and staging of CKD was then carried out using two parallel approaches: (1) clinical coding consistent with a diagnosis of CKD; (2) laboratory-confirmed CKD, based on estimated glomerular filtration rate (eGFR) or the presence of proteinuria. The study cohort comprised 1.2 million individuals aged 18 years and over. 78,153 (6.4%) of the population had CKD on the basis of an eGFR of < 60 mL/min/1.73 m², and a further 7366 (0.6%) individuals were identified as having CKD due to proteinuria. 19,504 (1.6%) individuals without laboratory-confirmed CKD had a clinical code consistent with the diagnosis. In addition, a subset of codes allowed for 1348 (0.1%) individuals receiving renal replacement therapy to be identified. Finding cases of CKD from primary care data using an ontological approach may have greater sensitivity than less comprehensive methods, particularly for identifying those receiving renal replacement therapy or with CKD stages 1 or 2. However, the possibility of inaccurate coding may limit the specificity of this method.
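
    The laboratory-confirmed arm of the case definition (eGFR < 60 mL/min/1.73 m² or proteinuria) can be expressed as a simple rule, sketched below. The staging cut-offs are the usual KDIGO bands, included for illustration only and not as clinical guidance.

        # Toy implementation of the laboratory-confirmed CKD rule described above.
        # Thresholds follow the standard KDIGO eGFR bands; illustration only.
        def ckd_status(egfr: float, proteinuria: bool) -> str:
            if egfr >= 60 and not proteinuria:
                return "no laboratory evidence of CKD"
            if egfr >= 90:
                stage = "1 (with proteinuria)"
            elif egfr >= 60:
                stage = "2 (with proteinuria)"
            elif egfr >= 45:
                stage = "3a"
            elif egfr >= 30:
                stage = "3b"
            elif egfr >= 15:
                stage = "4"
            else:
                stage = "5"
            return f"CKD, stage {stage}"

        print(ckd_status(egfr=52, proteinuria=False))  # CKD, stage 3a
        print(ckd_status(egfr=75, proteinuria=True))   # CKD, stage 2 (with proteinuria)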

  20. The Proteasix Ontology.

    Science.gov (United States)

    Arguello Casteleiro, Mercedes; Klein, Julie; Stevens, Robert

    2016-06-04

    The Proteasix Ontology (PxO) is an ontology that supports the Proteasix tool, an open-source peptide-centric tool that can be used to automatically predict, in silico and on a large scale, the proteases involved in the generation of proteolytic cleavage fragments (peptides). The PxO re-uses parts of the Protein Ontology, the three Gene Ontology sub-ontologies, the Chemical Entities of Biological Interest Ontology, the Sequence Ontology and bespoke extensions to the PxO in support of a series of roles: 1. To describe the known proteases and their target cleavage sites. 2. To enable the description of proteolytic cleavage fragments as the outputs of observed and predicted proteolysis. 3. To use knowledge about the function, species and cellular location of a protease and protein substrate to support the prioritisation of proteases in observed and predicted proteolysis. The PxO is designed to describe the biological underpinnings of the generation of peptides. The peptide-centric PxO seeks to support the Proteasix tool by separating domain knowledge from the operational knowledge used in protease prediction by Proteasix, and to support the confirmation of its analyses and results. The Proteasix Ontology may be found at: http://bioportal.bioontology.org/ontologies/PXO . This ontology is free and open for use by everyone.

  1. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    Science.gov (United States)

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
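
    As a rough sketch of the indicator-and-value conceptualization mentioned above, the Python/rdflib fragment below records one benchmarking indicator for an IT service as triples. The published ontology lives at https://w3id.org/bmontology, but the namespace and the class and property names used here are placeholders assumed for the example.

        # Illustrative indicator/value triples, not the published bmontology vocabulary.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        BM = Namespace("http://example.org/bmontology-like#")  # placeholder namespace
        g = Graph()
        g.bind("bm", BM)

        g.add((BM.helpdesk_service, RDF.type, BM.ITService))
        g.add((BM.indicator_cost_per_ticket, RDF.type, BM.Indicator))
        g.add((BM.indicator_cost_per_ticket, BM.measuredFor, BM.helpdesk_service))
        g.add((BM.indicator_cost_per_ticket, BM.hasValue, Literal("12.5", datatype=XSD.decimal)))
        g.add((BM.indicator_cost_per_ticket, BM.hasUnit, Literal("EUR per ticket")))

        print(g.serialize(format="turtle"))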

  2. Ontology mapping specification in description logics for cooperative ...

    African Journals Online (AJOL)

    The rapid development of the Semantic Web is tied to the specification of more and more ontologies. These ontologies model knowledge agreed upon by communities of people about specific domains or tasks. The same domain described by two distinct communities will be ...

  3. A general lexicographic model for a typological variety of ...

    African Journals Online (AJOL)

    eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route in describing a model based on entities and the relations between them; MySQL (a relational database management system based on the Structured Query Language) ...

  4. Decreasing Cognitive Load for Learners: Strategy of Web-Based Foreign Language Learning

    Science.gov (United States)

    Zhang, Jianfeng

    2013-01-01

    Cognitive load is one of the important factors that influence the effectiveness and efficiency of web-based foreign language learning. Cognitive load theory assumes that human cognitive capacity in working memory is limited and that, if it is overloaded, learning will be hampered; thus a high level of cognitive load can impair the performance of learning…

  5. The Gene Ontology (GO) Cellular Component Ontology: integration with SAO (Subcellular Anatomy Ontology) and other recent developments

    Science.gov (United States)

    2013-01-01

    Background The Gene Ontology (GO) (http://www.geneontology.org/) contains a set of terms for describing the activity and actions of gene products across all kingdoms of life. Each of these activities is executed in a location within a cell or in the vicinity of a cell. In order to capture this context, the GO includes a sub-ontology called the Cellular Component (CC) ontology (GO-CCO). The primary use of this ontology is for GO annotation, but it has also been used for phenotype annotation, and for the annotation of images. Another ontology with similar scope to the GO-CCO is the Subcellular Anatomy Ontology (SAO), part of the Neuroscience Information Framework Standard (NIFSTD) suite of ontologies. The SAO also covers cell components, but in the domain of neuroscience. Description Recently, the GO-CCO was enriched in content and links to the Biological Process and Molecular Function branches of GO as well as to other ontologies. This was achieved in several ways. We carried out an amalgamation of SAO terms with GO-CCO ones; as a result, nearly 100 new neuroscience-related terms were added to the GO. The GO-CCO also contains relationships to GO Biological Process and Molecular Function terms, as well as connecting to external ontologies such as the Cell Ontology (CL). Terms representing protein complexes in the Protein Ontology (PRO) reference GO-CCO terms for their species-generic counterparts. GO-CCO terms can also be used to search a variety of databases. Conclusions In this publication we provide an overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information. One of the most recent developments of the GO-CCO was the merging in of the SAO, resulting in a single unified ontology designed to serve the needs of GO annotators as well as the specific needs of the neuroscience community. PMID:24093723

  6. Epistemology and ontology in core ontologies: FOLaw and LRI-Core, two core ontologies for law

    NARCIS (Netherlands)

    Breukers, J.A.P.J.; Hoekstra, R.J.

    2004-01-01

    Having constructed ontologies for legal domains for more than a decade, we at the Leibniz Center for Law felt a real need to develop a core ontology for law that would enable us to re-use the common denominator of the various legal domains. In this paper we present two core ontologies for law. The

  7. Ontology-based specification, identification and analysis of perioperative risks.

    Science.gov (United States)

    Uciteli, Alexandr; Neumann, Juliane; Tahar, Kais; Saleh, Kutaiba; Stucke, Stephan; Faulbrück-Röhr, Sebastian; Kaeding, André; Specht, Martin; Schmidt, Tobias; Neumuth, Thomas; Besting, Andreas; Stegemann, Dominik; Portheine, Frank; Herre, Heinrich

    2017-09-06

    Medical personnel in hospitals often work under great physical and mental strain. In medical decision-making, errors can never be completely ruled out. Several studies have shown that between 50 and 60% of adverse events could have been avoided through better organization, more attention or more effective security procedures. Critical situations especially arise during interdisciplinary collaboration and the use of complex medical technology, for example during surgical interventions and in perioperative settings (the period of time before, during and after surgical intervention). In this paper, we present an ontology and an ontology-based software system, which can identify risks across medical processes and support the avoidance of errors, in particular in the perioperative setting. We developed a practicable definition of the risk notion, which is easily understandable by the medical staff and is usable for the software tools. Based on this definition, we developed a Risk Identification Ontology (RIO) and used it for the specification and the identification of perioperative risks. An agent system was developed, which gathers risk-relevant data during the whole perioperative treatment process from various sources and provides it for risk identification and analysis in a centralized fashion. The results of such an analysis are provided to the medical personnel in the form of context-sensitive hints and alerts. For the identification of the ontologically specified risks, we developed an ontology-based software module, called Ontology-based Risk Detector (OntoRiDe). About 20 risks relating to cochlear implantation (CI) have already been implemented. Comprehensive testing has indicated the correctness of the data acquisition, risk identification and analysis components, as well as the web-based visualization of results.

  8. Logic and Ontology

    Directory of Open Access Journals (Sweden)

    Newton C. A. da Costa

    2002-12-01

    Full Text Available In view of the present state of development of non-classical logic, especially paraconsistent logic, a new stand regarding the relations between logic and ontology is defended. In a parody of a dictum of Quine, my stand may be summarized as follows: to be is to be the value of a variable in a specific language with a given underlying logic. Yet my stand differs from Quine's because, among other reasons, I accept some first-order heterodox logics as genuine alternatives to classical logic. I also discuss some questions of non-classical logic to substantiate my argument, and suggest that my position complements and extends some ideas advanced by L. Apostel.

  9. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    Science.gov (United States)

    Youn, Seongwook

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.
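
    To make the two-level idea concrete, here is a deliberately simplified Python sketch in which a message is scored first against a global set of spam-related terms and then against a user-specific set; the term sets, weights and threshold are invented for illustration and do not reflect the paper's ontology-based implementation.

        # Illustrative two-level cascade: global filter first, user-customized filter second.
        GLOBAL_SPAM_TERMS = {"lottery", "winner", "viagra", "refinance"}   # hypothetical level 1
        USER_SPAM_TERMS = {"crypto", "airdrop"}                            # hypothetical level 2

        def spam_score(text: str, global_terms=GLOBAL_SPAM_TERMS, user_terms=USER_SPAM_TERMS) -> float:
            words = {w.strip(".,!?").lower() for w in text.split()}
            global_hits = len(words & global_terms)
            user_hits = len(words & user_terms)
            # The user-customized level is weighted higher, mirroring the idea that it
            # encodes the specific user's background on top of the global filter.
            return global_hits + 2 * user_hits

        def is_spam(text: str, threshold: float = 2.0) -> bool:
            return spam_score(text) >= threshold

        print(is_spam("Congratulations winner, claim your lottery prize"))  # True
        print(is_spam("Minutes from yesterday's project meeting"))          # False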

  10. SPONGY (SPam ONtoloGY: Email Classification Using Two-Level Dynamic Ontology

    Directory of Open Access Journals (Sweden)

    Seongwook Youn

    2014-01-01

    Full Text Available Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user’s background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.

  11. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    Science.gov (United States)

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240

  12. Annotating the human genome with Disease Ontology

    Science.gov (United States)

    Osborne, John D; Flatow, Jared; Holko, Michelle; Lin, Simon M; Kibbe, Warren A; Zhu, Lihua (Julie); Danila, Maria I; Feng, Gang; Chisholm, Rex L

    2009-01-01

    Background The human genome has been extensively annotated with Gene Ontology for biological functions, but minimally computationally annotated for diseases. Results We used the Unified Medical Language System (UMLS) MetaMap Transfer tool (MMTx) to discover gene-disease relationships from the GeneRIF database. We utilized a comprehensive subset of UMLS, which is disease-focused and structured as a directed acyclic graph (the Disease Ontology), to filter and interpret results from MMTx. The results were validated against the Homayouni gene collection using recall and precision measurements. We compared our results with the widely used Online Mendelian Inheritance in Man (OMIM) annotations. Conclusion The validation data set suggests a 91% recall rate and 97% precision rate of disease annotation using GeneRIF, in contrast with a 22% recall and 98% precision using OMIM. Our thesaurus-based approach allows for comparisons to be made between disease-containing databases and allows for increased accuracy in disease identification through synonym matching. The much higher recall rate of our approach demonstrates that annotating the human genome with Disease Ontology and GeneRIF for diseases dramatically increases the coverage of disease annotation of the human genome. PMID:19594883

  13. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability.

    Science.gov (United States)

    Diehl, Alexander D; Meehan, Terrence F; Bradford, Yvonne M; Brush, Matthew H; Dahdul, Wasila M; Dougall, David S; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E; Vasilevsky, Nicole A; Haendel, Melissa A; Blake, Judith A; Mungall, Christopher J

    2016-07-04

    The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the

  14. Representing virus-host interactions and other multi-organism processes in the Gene Ontology.

    Science.gov (United States)

    Foulger, R E; Osumi-Sutherland, D; McIntosh, B K; Hulo, C; Masson, P; Poux, S; Le Mercier, P; Lomax, J

    2015-07-28

    The Gene Ontology project is a collaborative effort to provide descriptions of gene products in a consistent and computable language, and in a species-independent manner. The Gene Ontology is designed to be applicable to all organisms but up to now has been largely under-utilized for prokaryotes and viruses, in part because of a lack of appropriate ontology terms. To address this issue, we have developed a set of Gene Ontology classes that are applicable to microbes and their hosts, improving both coverage and quality in this area of the Gene Ontology. Describing microbial and viral gene products brings with it the additional challenge of capturing both the host and the microbe. Recognising this, we have worked closely with annotation groups to test and optimize the GO classes, and we describe here a set of annotation guidelines that allow the controlled description of two interacting organisms. Building on the microbial resources already in existence such as ViralZone, UniProtKB keywords and MeGO, this project provides an integrated ontology to describe interactions between microbial species and their hosts, with mappings to the external resources above. Housing this information within the freely-accessible Gene Ontology project allows the classes and annotation structure to be utilized by a large community of biologists and users.

  15. An Ontology for State Analysis: Formalizing the Mapping to SysML

    Science.gov (United States)

    Wagner, David A.; Bennett, Matthew B.; Karban, Robert; Rouquette, Nicolas; Jenkins, Steven; Ingham, Michel

    2012-01-01

    State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To effectively apply the State Analysis methodology in this context it became necessary to provide ontological definitions of the concepts and relations in State Analysis and greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper will discuss the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML and approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.

  16. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    Science.gov (United States)

    Babaie, Hassan A.; Broda Cindi, M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California, has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer, which allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which served as a visual guide to develop the ontology in OWL using the Protégé ontology editor. Various Semantic Web technologies, such as the RDF, RDFS, and OWL ontology languages, the SPARQL query language, and the Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a Java-based framework, with AJAX technology, JSP pages, and Java servlets, and deployed via an Apache Tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the
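
    The kind of query the knowledge base's Web interface runs can be sketched with Python and rdflib as below (the production system uses Jena and Pellet on the server side); the namespace, classes and properties are placeholders assumed for the example, not the actual BM2KB schema.

        # Hedged sketch of a BM2KB-style SPARQL search over sample descriptions.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        BM = Namespace("http://example.org/bm2kb#")  # hypothetical namespace
        g = Graph()
        g.bind("bm", BM)

        g.add((BM.sample_G12, RDF.type, BM.CoreSample))
        g.add((BM.sample_G12, BM.showsMicrostructure, BM.Cataclasite))
        g.add((BM.sample_G12, BM.contributedBy, Literal("investigator-A")))

        query = """
        PREFIX bm: <http://example.org/bm2kb#>
        SELECT ?sample ?investigator WHERE {
            ?sample a bm:CoreSample ;
                    bm:showsMicrostructure bm:Cataclasite ;
                    bm:contributedBy ?investigator .
        }
        """
        for row in g.query(query):
            print(row.sample, row.investigator)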

  17. Theorizing and Studying the Language-Teaching Mind: Mapping Research on Language Teacher Cognition

    Science.gov (United States)

    Burns, Anne; Freeman, Donald; Edwards, Emily

    2015-01-01

    The overarching project of the conceptual and empirical contributions in this special issue is to redraw boundaries for language teacher cognition research. Our aim in this final article is to complement the foregoing collection of articles by conceptualizing ontologically and methodologically past and current trajectories in language teacher…

  18. Managing Requirement Volatility in an Ontology-Driven Clinical LIMS Using Category Theory

    Directory of Open Access Journals (Sweden)

    Arash Shaban-Nejad

    2009-01-01

    Full Text Available Requirement volatility is an issue in software engineering in general, and in Web-based clinical applications in particular, which often originates from an incomplete knowledge of the domain of interest. With advances in the health science, many features and functionalities need to be added to, or removed from, existing software applications in the biomedical domain. At the same time, the increasing complexity of biomedical systems makes them more difficult to understand, and consequently it is more difficult to define their requirements, which contributes considerably to their volatility. In this paper, we present a novel agent-based approach for analyzing and managing volatile and dynamic requirements in an ontology-driven laboratory information management system (LIMS) designed for Web-based case reporting in medical mycology. The proposed framework is empowered with ontologies and formalized using category theory to provide a deep and common understanding of the functional and nonfunctional requirement hierarchies and their interrelations, and to trace the effects of a change on the conceptual framework.

  19. Alignment of ICNP® 2.0 ontology and a proposed ICNP® Brazilian ontology.

    Science.gov (United States)

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; Nóbrega, Maria Miriam Lima da

    2014-01-01

    To align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed ICNP® Brazilian Ontology. This was a document-based, exploratory and descriptive study, the empirical basis of which was provided by the ICNP® 2.0 Ontology and the ICNP® Brazilian Ontology. The ontology alignment was performed using a computer tool with algorithms to identify correspondences between concepts, which were organized and analyzed according to their presence or absence, their names, and their sibling, parent, and child classes. There were 2,682 concepts present in the ICNP® 2.0 Ontology that were missing in the Brazilian Ontology; 717 concepts present in the Brazilian Ontology were missing in the ICNP® 2.0 Ontology; and there were 215 pairs of matching concepts. It is believed that the correspondences identified in this study might contribute to the interoperability between the representations of nursing practice elements in ICNP®, thus allowing the standardization of nursing records based on this classification system.

  20. BioAssay templates for the semantic web

    Directory of Open Access Journals (Sweden)

    Alex M. Clark

    2016-05-01

    Full Text Available Annotation of bioassay protocols using semantic web vocabulary is a way to make experiment descriptions machine-readable. Protocols are communicated using concise scientific English, which precludes most kinds of analysis by software algorithms. Given the availability of a sufficiently expressive ontology, some or all of the pertinent information can be captured by asserting a series of facts, expressed as semantic web triples (subject, predicate, object). With appropriate annotation, assays can be searched, clustered, tagged and evaluated in a multitude of ways, analogous to other segments of drug discovery informatics. The BioAssay Ontology (BAO) has been previously designed for this express purpose, and provides a layered hierarchy of meaningful terms which can be linked to. Currently the biggest challenge is the issue of content creation: scientists cannot be expected to use the BAO effectively without having access to software tools that make it straightforward to use the vocabulary in a canonical way. We have sought to remove this barrier by: (1) defining a BioAssay Template (BAT) data model; (2) creating a software tool for experts to create or modify templates to suit their needs; and (3) designing a common assay template (CAT) to leverage the most value from the BAO terms. The CAT was carefully assembled by biologists in order to find a balance between the maximum amount of information captured vs. low degrees of freedom in order to keep the user experience as simple as possible. The data format that we use for describing templates and corresponding annotations is the native format of the semantic web (RDF triples), and we demonstrate some of the ways that generated content can be meaningfully queried using the SPARQL language. We have made all of these materials available as open source (http://github.com/cdd/bioassay-template), in order to encourage community input and use within diverse projects, including but not limited to our own
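
    A minimal sketch of this annotation pattern is shown below in Python with rdflib: assay facts are asserted as (subject, predicate, object) triples and then retrieved with SPARQL. The namespaces and term names are placeholders standing in for BAO-style terms, not actual BAO identifiers.

        # Illustrative assay annotation as RDF triples, queried with SPARQL.
        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF

        ASSAY = Namespace("http://example.org/assays#")
        TERM = Namespace("http://example.org/bao-like-terms#")   # stand-in for BAO terms

        g = Graph()
        g.bind("assay", ASSAY)
        g.bind("term", TERM)

        # (subject, predicate, object) annotations for one assay record.
        g.add((ASSAY.assay_42, RDF.type, TERM.BioassayProtocol))
        g.add((ASSAY.assay_42, TERM.hasDetectionMethod, TERM.Fluorescence))
        g.add((ASSAY.assay_42, TERM.hasTarget, TERM.KinaseX))

        # Find every assay annotated with a fluorescence-based detection method.
        q = """
        PREFIX term: <http://example.org/bao-like-terms#>
        SELECT ?assay WHERE { ?assay term:hasDetectionMethod term:Fluorescence . }
        """
        for row in g.query(q):
            print(row.assay)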

  1. Supporting the Bronze Casting Through Information Structuring Based on Ontology Application

    Directory of Open Access Journals (Sweden)

    Górny Z.

    2014-03-01

    Full Text Available A significant part of the knowledge used in production processes is represented in natural language. Yet, the use of that knowledge in computer-assisted decision-making requires the application of appropriate formal and development tools. An interesting possibility is offered by the use of an ontology that is understandable both to humans and to computers. This paper presents a proposal for structuring information about foundry processes, based on the definition of an ontology adapted to the physical structure of the ongoing technological operations that make up the process of producing castings.

  2. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    Science.gov (United States)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as a basis.

  3. Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing

    Science.gov (United States)

    Sermet, M. Y.; Demir, I.; Krajewski, W. F.

    2015-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood related issues in IFIS. Instead of navigating within many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, analysis and visualization tools to answer natural language questions. Our goal is the systematization of data and modeling results on flood related issues in Iowa, and to provide an interface for definitive answers to factual queries. The goal of the knowledge engine is to make all flood related knowledge in Iowa easily accessible to everyone, and support voice-enabled natural language input. We aim to integrate and curate all flood related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models, as algorithms, and curates all flood related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server side and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses the future plans

  4. Modular Knowledge Representation and Reasoning in the Semantic Web

    Science.gov (United States)

    Serafini, Luciano; Homola, Martin

    Construction of modular ontologies by combining different modules is becoming a necessity in ontology engineering in order to cope with the increasing complexity of the ontologies and the domains they represent. The modular ontology approach takes inspiration from software engineering, where modularization is a widely acknowledged feature. Distributed reasoning is the other side of the coin of modular ontologies: given an ontology comprising a set of modules, it is desired to perform reasoning by combination of multiple reasoning processes performed locally on each of the modules. In the last ten years, a number of approaches for combining logics have been developed in order to formalize modular ontologies. In this chapter, we survey and compare the main formalisms for modular ontologies and distributed reasoning in the Semantic Web. We select four formalisms built on the formal logical grounds of Description Logics: Distributed Description Logics, ℰ-connections, Package-based Description Logics and Integrated Distributed Description Logics. We concentrate on expressivity and distinctive modeling features of each framework. We also discuss reasoning capabilities of each framework.

  5. Axiomatic Ontology Learning Approaches for English Translation of the Meaning of Quranic Texts

    Directory of Open Access Journals (Sweden)

    Saad Saidah

    2017-01-01

    Full Text Available Ontology learning (OL) is the computational task of generating a knowledge base in the form of an ontology, given an unstructured corpus in natural language (NL). While most works in the field of ontology learning have been primarily based on a statistical approach to extract lightweight OL, very few attempts have been made to extract axiomatic OL (called heavyweight OL) from NL text documents. Axiomatic OL supports more precise formal logic-based reasoning when compared to lightweight OL. Lexico-syntactic pattern matching and statistical approaches alone cannot lead to very accurate learning, mostly because of several linguistic nuances in the NL. Axiomatic OL is an alternative methodology that has not been explored much, in which a deep linguistic analysis in computational linguistics is used to generate formal axioms and definitions instead of simply inducing a taxonomy. The ontology that is created not only stores the information about the application domain as explicit knowledge, but can also deduce implicit knowledge from this ontology. This research explores the English translation of the meaning of Quranic texts.

  6. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  7. Challenges to web-based learning in pharmacy education in Arabic language speaking countries

    Directory of Open Access Journals (Sweden)

    Ramez M Alkoudmani

    2015-01-01

    Full Text Available Web-based learning and web 2.0 tools which include new online educational technologies (EdTech) and social media websites like Facebook® are playing crucial roles nowadays in pharmacy and medical education among millennial learners. Podcasting, webinars, and online learning management systems like Moodle® and other web 2.0 tools have been used in pharmacy and medical education to interactively share knowledge with peers and students. Learners can use laptops, iPads, iPhones, or tablet devices with a stable and good Internet connection to enroll in many online courses. Implementation of novel online EdTech in pharmacy and medical curricula has been noticed in developed countries such as European countries, the US, Canada, and Australia. However, these trends are scarce in the majority of Arabic language speaking countries (ALSC), where traditional and didactic educational methods are still being used with some exceptions seen in Palestine, Kuwait, Jordan, Saudi Arabia, Egypt, UAE, and Qatar. Although these new trends are promising to push pharmacy and medical education forward, major barriers regarding adaptation of E-learning and new online EdTech in Arab states have been reported such as higher connectivity costs, information communication technology (ICT) problems, language barriers, wars and political conflicts, poor education, financial problems, and lack of qualified ICT-savvy educators. More research efforts are encouraged to study the effectiveness and proper use of web-based learning and emerging online EdTech in pharmacy education not only in ALSC but also in developing and developed countries.

  8. Web 2.0 in Computer-Assisted Language Learning: A Research Synthesis and Implications for Instructional Design and Educational Practice

    Science.gov (United States)

    Parmaxi, Antigoni; Zaphiris, Panayiotis

    2017-01-01

    This study explores the research development pertaining to the use of Web 2.0 technologies in the field of Computer-Assisted Language Learning (CALL). Published research manuscripts related to the use of Web 2.0 tools in CALL have been explored, and the following research foci have been determined: (1) Web 2.0 tools that dominate second/foreign…

  9. Building a Chemical Ontology using Methontology and the Ontology Design Environment

    OpenAIRE

    Fernández López, Mariano; Gómez-Pérez, A.; Pazos Sierra, Alejandro; Pazos Sierra, Juan

    1999-01-01

    Methontology provides guidelines for specifying ontologies at the knowledge level, as a specification of a conceptualization. ODE enables ontology construction, covering the entire life cycle and automatically implementing ontologies.

  10. Providing a New Model for Discovering Cloud Services Based on Ontology

    Directory of Open Access Journals (Sweden)

    B. Heydari

    2017-12-01

    Full Text Available Owing to its efficient, flexible, and dynamic infrastructure for information technology and for estimating service quality parameters, cloud computing has become one of the most important topics in computing. Discovering cloud services is a fundamental issue in achieving high efficiency. To carry out their operations in the cloud, users may need to request several different services, either simultaneously or as part of a working routine. These services can be offered by different cloud providers or under different decision-making policies. Therefore, service management is one of the important and challenging issues in cloud computing. With the advent of the semantic web, and of the practical services it enables in the cloud computing space, access to different kinds of applications has become possible. Ontology is the core of the semantic web and can be used to ease the process of discovering services. A new ontology-based model is proposed in this paper. The results indicate that the proposed model discovers cloud services matching user searches in less time than other models.

  11. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." This is because, compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less limited structure of images presents researchers in the computer vision community with a tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People, hence, can exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from the web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility on image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts for machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is thus content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on the semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved not only based on text cues but their actual contents as well; second, the grouping

  12. Our Policies, Their Text: German Language Students' Strategies with and Beliefs about Web-Based Machine Translation

    Science.gov (United States)

    White, Kelsey D.; Heidrich, Emily

    2013-01-01

    Most educators are aware that some students utilize web-based machine translators for foreign language assignments; however, little research has been done to determine how and why students utilize these programs, or what the implications are for language learning and teaching. In this mixed-methods study we utilized surveys, a translation task,…

  13. An Investigation into Chinese College English Teachers' Beliefs of Students' Web-Based Informal Language Learning

    Science.gov (United States)

    Jiang, Jiahong

    2016-01-01

    With the rapid development of information technology, language learners have more ways to acquire the target language. Recently, web-based informal language learning (WILL) has gained popularity, for informal web-based learning of English has been depicted as a process driven by the purpose of communication. Thus, teachers have many challenges when teaching learners who have…

  14. Towards Process-Ontology: A Critical Study of Substance-Ontological Premises

    DEFF Research Database (Denmark)

    Seibt, Johanna

    The thesis proposes a therapeutic revision of fundamental assumptions in contemporary ontological thought. I show that none of the prevalent theories of objects, by virtue of certain implicit substance-ontological assumptions, provides a viable account of the numerical, qualitative, and trans-tempora......-ontological presuppositions, I finally explore the result of rejecting all of them and sketch a scheme based on dynamic masses which promises to yield a coherent explanation of the ontological features of those complex processes that we commonly call objects.

  15. The Effects of Web 2.0 Technologies Usage in Programming Languages Lesson on the Academic Success, Interrogative Learning Skills and Attitudes of Students towards Programming Languages

    Science.gov (United States)

    Gençtürk, Abdullah Tarik; Korucu, Agah Tugrul

    2017-01-01

    It is observed that teacher candidates receiving education in the department of Computer and Instructional Technologies Education are not able to gain enough experience and knowledge in the "Programming Languages" lesson. The goal of this study is to analyse the effects of Web 2.0 technologies usage in the programming languages lesson on the…

  16. Ontology-Guided Image Interpretation for GEOBIA of High Spatial Resolution Remote Sense Imagery: A Coastal Area Case Study

    Directory of Open Access Journals (Sweden)

    Helingjie Huang

    2017-03-01

    Full Text Available Image interpretation is a major topic in the remote sensing community. With the increasing acquisition of high spatial resolution (HSR) remotely sensed images, incorporating geographic object-based image analysis (GEOBIA) is becoming an important sub-discipline for improving remote sensing applications. The idea of integrating the human ability to understand images inspires research related to introducing expert knowledge into image object-based interpretation. The relevant work involves three parts: (1) identification and formalization of domain knowledge; (2) image segmentation and feature extraction; and (3) matching image objects with geographic concepts. This paper presents a novel way of combining multi-scale segmented image objects with geographic concepts to express context in an ontology-guided image interpretation. Spectral features and geometric features of a single object are extracted after segmentation, and topological relationships are also used in the interpretation. The Web Ontology Language query language (OWL-QL) formalizes the domain knowledge, and the interpretation matching procedure is then implemented by OWL-QL query answering. Compared with a supervised classification, which does not consider context, the proposed method was validated on two HSR images of coastal areas in China. The number of interpreted classes increased (19 classes over 10 in Case 1 and 12 over seven in Case 2) and the overall accuracy improved (0.77 over 0.55 in Case 1 and 0.86 over 0.65 in Case 2). The additional context of the image objects improved accuracy during image classification. The proposed approach shows the pivotal role of ontology in knowledge-guided interpretation.

  17. Gold-standard evaluation of a folksonomy-based ontology learning model

    Science.gov (United States)

    Djuana, E.

    2018-03-01

    Folksonomy, as one result of the collaborative tagging process, has been acknowledged for its potential in improving the categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as different abstractions, or the generality problem. To maximize its potential, some methods for associating the tags of a folksonomy with semantics and structural relationships have been proposed, such as ontology learning methods. This paper evaluates our previous work in ontology learning using the gold-standard evaluation approach, in comparison to a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, as it had previously been validated using a task-based evaluation approach.

  18. Enabling knowledge representation on the Web by extending RDF Schema

    NARCIS (Netherlands)

    Broekstra, Jeen; Klein, Michel; Decker, Stefan; Fensel, Dieter; Van Harmelen, Frank; Horrocks, Ian

    2002-01-01

    Recently, a widespread interest has emerged in using ontologies on the Web. Resource Description Framework Schema (RDFS) is a basic tool that enables users to define vocabulary, structure and constraints for expressing meta data about Web resources. However, it includes no provisions for formal
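
    By way of illustration of the basic RDFS facilities the abstract mentions, the sketch below uses Python and rdflib to define a tiny vocabulary with class, subclass, domain and range statements; the vocabulary itself is an invented example, not taken from the paper.

        # Minimal RDFS vocabulary: classes, a property, and domain/range constraints.
        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS

        VOC = Namespace("http://example.org/vocab#")
        g = Graph()
        g.bind("voc", VOC)

        g.add((VOC.Document, RDF.type, RDFS.Class))
        g.add((VOC.Report, RDF.type, RDFS.Class))
        g.add((VOC.Report, RDFS.subClassOf, VOC.Document))

        g.add((VOC.author, RDF.type, RDF.Property))
        g.add((VOC.author, RDFS.domain, VOC.Document))
        g.add((VOC.author, RDFS.range, RDFS.Literal))

        print(g.serialize(format="turtle"))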

  19. SUGOI: automated ontology interchangeability

    CSIR Research Space (South Africa)

    Khan, ZC

    2015-04-01

    Full Text Available A foundational ontology can solve interoperability issues among the domain ontologies aligned to it. However, several foundational ontologies have been developed, so such interoperability issues still exist among domain ontologies aligned to different foundational ontologies. The novel SUGOI tool...

  20. Effects of Online Instructional Conversation on English as a Foreign Language Learners' WebQuest Writing Performance: A Mixed Methods Study

    Science.gov (United States)

    Lee, Haesong

    2013-01-01

    WebQuests, or inquiry-oriented activities in which learners interact with Web-based information (Dodge, 1995, 1996, 2007), have recently been gaining popularity in education in general and in language education in particular. While it has the advantage of fostering higher-level thinking through authentic assignments, a WebQuest can be challenging…

  1. Estudio tecnológico y diseño arquitectónico de un sistema de gestión de esquemas semánticos basados en ontologías

    OpenAIRE

    Fernández-Tostado Canorea, Tatiana

    2009-01-01

    Ontologies are a fundamental part of the semantic web, since they make it possible to relate information on the web to its meaning. This process of semantic qualification is necessary to achieve semantic information retrieval on the web. The SEMSE project proposes to semantically qualify metadata schemas by means of a software system that makes it possible to use the semantics included in ontologies distributed via the web. The present work focuses, after an analysis of the problem...

  2. Concerning the Importance of Ontological Issues for Cultural Psychology: a Reply to Comments.

    Science.gov (United States)

    Mironenko, Irina A

    2017-09-01

    The paper continues the "ontological" discussion in IBPS, addressing the question of the importance of ontological issues for the contemporary development of cultural psychology. The language psychological science speaks is considered an ontological issue, and a most topical one for cultural psychology, which aims at "constructing a psychology that is universal while being culture-inclusive" (Valsiner 2009, p.2). Ontological issues could stay implicit and neglected as long as the étant, "the mode of being", "the particularities", were discussed within the circle of adherents of one and the same school, who implicitly had in mind the same être. However, as soon as the discussion involves representatives of different schools, ontological issues become crucial for mutual understanding, and the meanings of words have to be explicated. The same words, like "psyche", "subjectivity", "social", "culture", etc., often mean different things when they are pronounced or written by representatives of different theoretical trends. The discussion of the étant without clearly indicating the être under consideration is likely to turn into a Babel. Global modernity requires constant effort and an insistent desire for mutual understanding across the diversified global scientific community. Thus, creative collaboration in epistemological developments has to be grounded in a clear comprehension of the ontological stances of the debaters.

  3. CO2 and O2 solubility and diffusivity data in food products stored in data warehouse structured by ontology.

    Science.gov (United States)

    Guillard, Valérie; Buche, Patrice; Dibie, Juliette; Dervaux, Stéphane; Acerbi, Filippo; Chaix, Estelle; Gontard, Nathalie; Guillaume, Carole

    2016-06-01

    This data article contains values of oxygen and carbon dioxide solubility and diffusivity measured in various model and real food products. These data are stored in a public repository structured by ontology. These data can be retrieved through the @Web tool, a user-friendly interface to capitalise and query data. The @Web tool is accessible online at http://pfl.grignon.inra.fr/atWeb/.

  4. A Case for Embedded Natural Logic for Ontological Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Nilsson, Jørgen Fischer

    2014-01-01

    We argue in favour of adopting a form of natural logic for ontology-structured knowledge bases as an alternative to description logic and rule based languages. Natural logic is a form of logic resembling natural language assertions, unlike description logic. This is essential, e.g., in life sciences, where the large and evolving knowledge specifications should be directly accessible to domain experts. Moreover, natural logic comes with intuitive inference rules. The considered version of natural logic leans toward the closed world assumption (CWA) unlike the open world assumption with classical

  5. Combining the Generic Entity-Attribute-Value Model and Terminological Models into a Common Ontology to Enable Data Integration and Decision Support.

    Science.gov (United States)

    Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte

    2018-01-01

    The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity attribute value model and could be extended to other domains.
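
    The fit between the entity-attribute-value pattern and an ontological framework can be sketched as below, where each EAV row becomes one RDF triple whose attribute and coded value come from a terminology; all names are placeholders, not the DESIREE project's actual model.

        # Hedged illustration: EAV rows mapped onto RDF triples with rdflib.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        CLIN = Namespace("http://example.org/clinical#")  # hypothetical namespace
        g = Graph()
        g.bind("clin", CLIN)

        eav_rows = [
            (CLIN.patient_17, CLIN.hasDiagnosis, CLIN.BreastCarcinoma),               # coded value
            (CLIN.patient_17, CLIN.tumourSizeMm, Literal(22, datatype=XSD.integer)),  # literal value
        ]
        g.add((CLIN.patient_17, RDF.type, CLIN.Patient))
        for entity, attribute, value in eav_rows:
            g.add((entity, attribute, value))

        print(g.serialize(format="turtle"))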

  6. Information System of Resolution of Procedural Incidents and Management of the Modifications Made to the Electronic Court Registration

    Directory of Open Access Journals (Sweden)

    Ştefan Gheorghe PENTIUC

    2011-01-01

    Full Text Available This information system was developed for use by the staff responsible for the random distribution of cases to the courts. The Information System of Resolution of Procedural Incidents and Management of the Modifications Made to the Electronic Court Registration consists of three newly developed modules: the management module, a Web application which chronicles the modifications made in the electronic court registration regarding the random assignment of cases; the resolution of procedural incidents module, a Web service whose logic implements a Semantic Web application; and the module for confirming judges, a Windows service running on the judges' workstations. The Web service implements a Semantic Web application which processes the knowledge base built as an OWL (Ontology Web Language) ontology by applying inferences leading to the correct solution. If this does not solve the problem, a set of associated Jena rules is used to infer and generate new knowledge. It also uses the SPARQL (SPARQL Protocol and RDF Query Language) language, which allows queries on the knowledge similar to the classic query languages of databases. The novelty of the newly conceived, designed and implemented system consists in accessing the domain knowledge as a web service to solve the procedural incidents occurring in electronic court registration.

  7. A grammar-based semantic similarity algorithm for natural language sentences.

    Science.gov (United States)

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language" such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to include concept similarity comparison instead of co-occurrence of terms/words, may not always determine the perfect match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm achieves a significant performance improvement on sentences/short texts with arbitrary syntax and structure.

  8. Impact of a Web-Based Reading Program on Sixth-Grade English Language Learners

    Science.gov (United States)

    Wright, Rosena

    2010-01-01

    This applied dissertation was developed to determine (a) the impact that Achieve3000, a web-based reading program, had on the reading-comprehension skills of English language learners (ELLs) and (b) the perceptions of students and their teacher on the technology program used at the study school as it relates to the remediation of the reading…

  9. A multi-ontology approach to annotate scientific documents based on a modularization technique.

    Science.gov (United States)

    Gomes, Priscilla Corrêa E Castro; Moura, Ana Maria de Carvalho; Cavalcanti, Maria Cláudia

    2015-12-01

    Scientific text annotation has become an important task for biomedical scientists. Nowadays, there is an increasing need for the development of intelligent systems to support new scientific findings. Public databases available on the Web provide useful data, but much more useful information is only accessible in scientific texts. Text annotation may help, as it relies on ontologies to maintain annotations based on a uniform vocabulary. However, it is difficult to use an ontology, especially one that covers a large domain. In addition, since scientific texts explore multiple domains, which are covered by distinct ontologies, such a task becomes even more difficult. Moreover, there are dozens of ontologies in the biomedical area, and they are usually large in terms of the number of concepts. It is in this context that ontology modularization can be useful. This work presents an approach to annotate scientific documents using modules of different ontologies, which are built according to a module extraction technique. The main idea is to analyze a set of single-ontology annotations on a text to find out the user's interests. Based on these annotations, a set of modules is extracted from a set of distinct ontologies and made available to the user for complementary annotation. The reduced size and focus of the extracted modules tend to facilitate the annotation task. An experiment was conducted to evaluate this approach with the participation of a bioinformatics specialist from the Laboratory of Peptides and Proteins of the IOC/Fiocruz, who was interested in discovering new drug targets for combating tropical diseases. Copyright © 2015 Elsevier Inc. All rights reserved.
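
    A simplified, locality-based sketch of module extraction: starting from seed concepts that match the user's annotations, collect every class within a few subclass/superclass hops. This is a generic illustration, not the modularization technique used in the paper; the file path and seed IRI are placeholders.

```python
# Locality-based module extraction sketch (not the paper's technique): from seed
# classes, follow subclass/superclass links for a fixed number of hops.
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

def extract_module(graph, seeds, hops=2):
    module, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        reached = set()
        for cls in frontier:
            reached.update(graph.objects(cls, RDFS.subClassOf))   # superclasses
            reached.update(graph.subjects(RDFS.subClassOf, cls))  # subclasses
        frontier = reached - module
        module |= frontier
    return module

g = Graph()
g.parse("some_bio_ontology.owl")                           # placeholder path
seeds = {URIRef("http://purl.example.org/onto#Protein")}   # placeholder seed IRI
print(f"module size: {len(extract_module(g, seeds))}")
```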

  10. APPLICATION OF COMPUTER SYSTEMS ONTOLOGY IN THE PROCESS OF FUTURE ENGINEER AND EDUCATOR’S PRACTICAL ACTIVITY

    Directory of Open Access Journals (Sweden)

    Сергій Козіброда

    2014-04-01

    Full Text Available This article addresses the use of computer systems ontologies in the professional activity of future engineers and teachers in the sphere of computer technology. The task of automated exchange of formal model descriptions is substantiated as a main factor for research in the sphere of ontology use. The article also argues for the expediency of using computer systems ontologies in the following fields of future engineers' and teachers' training: artificial intelligence, interfaces, natural language processing, question-answering systems, classification of goods and services, semantic mark-up of text, modelling the organizational structure of enterprises, and systems of reference information (NSI).

  11. Ontology for assessment studies of human-computer-interaction in surgery.

    Science.gov (United States)

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer interaction (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies of HCI in surgery. The investigation model was formalized in the Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction between an "information model" and a "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly; only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions concerning at least one of the major implementation requirements. Therefore, GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process as a kind of standard or good clinical practice, based on the involved foundational frameworks.

  12. Alignment of ICNP® 2.0 Ontology and a proposed INCP® Brazilian Ontology

    OpenAIRE

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; da Nóbrega, Maria Miriam Lima

    2014-01-01

    OBJECTIVE: to align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed INCP® Brazilian Ontology. METHOD: document-based, exploratory and descriptive study, the empirical basis of which was provided by the ICNP® 2.0 Ontology and the INCP® Brazilian Ontology. The ontology alignment was performed using a computer tool with algorithms to identify correspondences between concepts, which were organized and analyzed according to their presence or absence...
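
    The record does not detail the alignment algorithms, so the following is only a generic illustration of label-based concept matching, one common ingredient of ontology alignment tools; the file names are placeholders.

```python
# Generic label-based matching sketch; not the specific tool used in the study.
from difflib import SequenceMatcher
from rdflib import Graph
from rdflib.namespace import RDFS

def labels(path):
    g = Graph()
    g.parse(path)
    return {str(lbl).lower(): s for s, lbl in g.subject_objects(RDFS.label)}

def align(path_a, path_b, threshold=0.9):
    la, lb = labels(path_a), labels(path_b)
    return [(ca, cb, round(SequenceMatcher(None, ta, tb).ratio(), 2))
            for ta, ca in la.items()
            for tb, cb in lb.items()
            if SequenceMatcher(None, ta, tb).ratio() >= threshold]

for a, b, score in align("icnp_2_0.owl", "icnp_brazilian.owl"):   # placeholder files
    print(a, "<->", b, score)
```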

  13. The Volcanism Ontology (VO): a model of the volcanic system

    Science.gov (United States)

    Myer, J.; Babaie, H. A.

    2017-12-01

    We have modeled a part of the complex material and process entities and properties of the volcanic system in the Volcanism Ontology (VO), applying several top-level ontologies such as Basic Formal Ontology (BFO), SWEET, and Ontology of Physics for Biology (OPB) within a single framework. The continuant concepts in BFO describe features with instances that persist as wholes through time and have qualities (attributes) that may change (e.g., state, composition, and location). In VO, the continuants include lava, volcanic rock, and volcano. The occurrent concepts in BFO include processes, their temporal boundaries, and the spatio-temporal regions within which they occur. In VO, these include eruption (process), the onset of pyroclastic flow (temporal boundary), and the space and time span of the crystallization of lava in a lava tube (spatio-temporal region). These processes can be of physical (e.g., debris flow, crystallization, injection), atmospheric (e.g., vapor emission, ash particles blocking solar radiation), hydrological (e.g., diffusion of water vapor, hot spring), thermal (e.g., cooling of lava) and other types. The properties (predicates) relate continuants to other continuants, occurrents to continuants, and occurrents to occurrents. The ontology also models other concepts such as laboratory and field procedures by volcanologists, sampling by sensors, and the types of instruments used in monitoring volcanic activity. When deployed on the web, VO will be used to explicitly and formally annotate data and information collected by volcanologists based on domain knowledge. This will enable the integration of global volcanic data and improve the interoperability of software that deals with such data.
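
    An illustrative fragment of how continuants and occurrents of this kind could be encoded as RDF with rdflib; the vo:/bfo: IRIs and class names are assumptions made for the example, not the published Volcanism Ontology.

```python
# Illustrative continuant/occurrent fragment; the vo:/bfo: IRIs and class names
# are assumptions for this example, not the published VO.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

VO = Namespace("http://example.org/vo#")
BFO = Namespace("http://example.org/bfo#")

g = Graph()
g.bind("vo", VO)
g.bind("bfo", BFO)

# Continuants persist as wholes through time
for cont in ("Lava", "VolcanicRock", "Volcano"):
    g.add((VO[cont], RDFS.subClassOf, BFO.Continuant))

# Occurrents unfold in time
g.add((VO.Eruption, RDFS.subClassOf, BFO.Process))
g.add((VO.PyroclasticFlowOnset, RDFS.subClassOf, BFO.TemporalBoundary))

# A property relating an occurrent to a continuant
g.add((VO.producesMaterial, RDF.type, RDF.Property))
g.add((VO.producesMaterial, RDFS.domain, VO.Eruption))
g.add((VO.producesMaterial, RDFS.range, VO.Lava))

print(g.serialize(format="turtle"))
```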

  14. Philosophical engineering toward a philosophy of the web

    CERN Document Server

    Halpin, Harry

    2013-01-01

    This is the first interdisciplinary exploration of the philosophical foundations of the Web, a new area of inquiry that has important implications across a range of domains. Contains twelve essays that bridge the fields of philosophy, cognitive science, and phenomenology. Tackles questions such as the impact of Google on intelligence and epistemology, the philosophical status of digital objects, ethics on the Web, semantic and ontological changes caused by the Web, and the potential of the Web to serve as a genuine cognitive extension. Brings together insightful new scholarship from well-known an…

  15. Knowledge retrieval from PubMed abstracts and electronic medical records with the Multiple Sclerosis Ontology.

    Science.gov (United States)

    Malhotra, Ashutosh; Gündel, Michaela; Rajput, Abdul Mateen; Mevissen, Heinz-Theodor; Saiz, Albert; Pastor, Xavier; Lozano-Rubi, Raimundo; Martinez-Lapiscina, Elena H; Zubizarreta, Irati; Mueller, Bernd; Kotelnikova, Ekaterina; Toldo, Luca; Hofmann-Apitius, Martin; Villoslada, Pablo

    2015-01-01

    In order to retrieve useful information from scientific literature and electronic medical records (EMR), we developed an ontology specific to Multiple Sclerosis (MS). The MS Ontology was created using scientific literature and expert review in the Protégé OWL environment. We developed a dictionary with semantic synonyms and translations into different languages for mining EMR. The MS Ontology was integrated with other ontologies and dictionaries (diseases/comorbidities, gene/protein, pathways, drugs) into the text-mining tool SCAIView. We analyzed the EMRs of 624 patients with MS using the MS Ontology dictionary in order to identify drug usage and comorbidities in MS. Testing of competency questions and functional evaluation using F statistics further validated the usefulness of the MS Ontology. Validation of the lexicalized ontology by means of named entity recognition-based methods showed adequate performance (F score = 0.73). The MS Ontology retrieved 80% of the genes associated with MS from scientific abstracts and identified additional pathways targeted by approved disease-modifying drugs (e.g. apoptosis pathways associated with mitoxantrone, rituximab and fingolimod). The analysis of the EMR from patients with MS identified current usage of disease-modifying drugs and symptomatic therapy, as well as comorbidities, in agreement with recent reports. The MS Ontology provides a semantic framework that is able to automatically extract information from both scientific literature and the EMR of patients with MS, revealing new pathogenesis insights as well as new clinical information.
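
    A minimal dictionary-based tagger and F-score check in the spirit of the lexicalized-ontology evaluation described above (not the SCAIView pipeline); the dictionary entries and gold annotations are toy examples.

```python
# Toy dictionary-based tagger plus F-score; entries and gold labels are invented.
import re

dictionary = {"multiple sclerosis": "Disease", "fingolimod": "Drug", "fatigue": "Symptom"}

def tag(text):
    return {(term, label) for term, label in dictionary.items()
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE)}

def f_score(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

text = "Patient with multiple sclerosis started fingolimod and reports fatigue."
gold = {("multiple sclerosis", "Disease"), ("fingolimod", "Drug"), ("fatigue", "Symptom")}
print(round(f_score(tag(text), gold), 2))
```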

  16. Text Mining to inform construction of Earth and Environmental Science Ontologies

    Science.gov (United States)

    Schildhauer, M.; Adams, B.; Rebich Hespanha, S.

    2013-12-01

    There is a clear need for better semantic representation of Earth and environmental concepts, to facilitate more effective discovery and re-use of information resources relevant to scientists doing integrative research. In order to develop general-purpose Earth and environmental science ontologies, however, it is necessary to represent concepts and relationships that span usage across multiple disciplines and scientific specialties. Traditional knowledge modeling through ontologies utilizes expert knowledge but inevitably favors the particular perspectives of the ontology engineers, as well as the domain experts who interacted with them. This often leads to ontologies that lack robust coverage of synonymy, while also missing important relationships among concepts that can be extremely useful for working scientists to be aware of. In this presentation we will discuss methods we have developed that utilize statistical topic modeling on a large corpus of Earth and environmental science articles, to expand coverage and disclose relationships among concepts in the Earth sciences. For our work we collected a corpus of over 121,000 abstracts from many of the top Earth and environmental science journals. We performed latent Dirichlet allocation topic modeling on this corpus to discover a set of latent topics, which consist of terms that commonly co-occur in abstracts. We match terms in the topics to concept labels in existing ontologies to reveal gaps, and we examine which terms are commonly associated in natural language discourse, to identify relationships that are important to formally model in ontologies. Our text mining methodology uncovers significant gaps in the content of some popular existing ontologies, and we show how, through a workflow involving human interpretation of topic models, we can bootstrap ontologies to have much better coverage and richer semantics. Because we base our methods directly on what working scientists are communicating about their
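
    A small sketch of the latent Dirichlet allocation step described above, using scikit-learn on a toy corpus; the actual study ran over roughly 121,000 abstracts and then matched topic terms to ontology concept labels.

```python
# LDA sketch with scikit-learn on a toy corpus (the study used ~121,000 abstracts).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "sediment transport in river deltas and coastal erosion",
    "volcanic eruption plumes and ash dispersion modeling",
    "soil moisture, evapotranspiration and drought indices",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
# Top terms per topic can then be compared against ontology concept labels to
# reveal missing synonyms and candidate relationships.
```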

  17. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    Science.gov (United States)

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
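
    A rough sketch of classifying an ontology by simple structural features (whether it declares object properties, data properties, or uses multiple inheritance); the structural meta-ontology in the paper is far richer, and this feature set is an assumption made for illustration.

```python
# Rough structural-feature check per ontology; the real structural meta-ontology
# is much richer, and the feature set here is only an assumption.
from collections import Counter
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def structural_features(path):
    g = Graph()
    g.parse(path)
    parent_counts = Counter(s for s, _ in g.subject_objects(RDFS.subClassOf))
    return {
        "object_properties": any(True for _ in g.subjects(RDF.type, OWL.ObjectProperty)),
        "datatype_properties": any(True for _ in g.subjects(RDF.type, OWL.DatatypeProperty)),
        "multiple_inheritance": any(n > 1 for n in parent_counts.values()),
    }

print(structural_features("some_bioportal_ontology.owl"))   # placeholder file
```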

  18. The EMBRACE web service collection

    DEFF Research Database (Denmark)

    Pettifer, S.; Ison, J.; Kalas, M.

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection...

  19. An ontology approach to comparative phenomics in plants

    KAUST Repository

    Oellrich, Anika

    2015-02-25

    Background: Plant phenotype datasets include many different types of data, formats, and terms from specialized vocabularies. Because these datasets were designed for different audiences, they frequently contain language and details tailored to investigators with different research objectives and backgrounds. Although phenotype comparisons across datasets have long been possible on a small scale, comprehensive queries and analyses that span a broad set of reference species, research disciplines, and knowledge domains continue to be severely limited by the absence of a common semantic framework. Results: We developed a workflow to curate and standardize existing phenotype datasets for six plant species, encompassing both model species and crop plants with established genetic resources. Our effort focused on mutant phenotypes associated with genes of known sequence in Arabidopsis thaliana (L.) Heynh. (Arabidopsis), Zea mays L. subsp. mays (maize), Medicago truncatula Gaertn. (barrel medic or Medicago), Oryza sativa L. (rice), Glycine max (L.) Merr. (soybean), and Solanum lycopersicum L. (tomato). We applied the same ontologies, annotation standards, formats, and best practices across all six species, thereby ensuring that the shared dataset could be used for cross-species querying and semantic similarity analyses. Curated phenotypes were first converted into a common format using taxonomically broad ontologies such as the Plant Ontology, Gene Ontology, and Phenotype and Trait Ontology. We then compared ontology-based phenotypic descriptions with an existing classification system for plant phenotypes and evaluated our semantic similarity dataset for its ability to enhance predictions of gene families, protein functions, and shared metabolic pathways that underlie informative plant phenotypes. Conclusions: The use of ontologies, annotation standards, shared formats, and best practices for cross-taxon phenotype data analyses represents a novel approach to plant phenomics

  20. An ontology approach to comparative phenomics in plants

    KAUST Repository

    Oellrich, Anika; Walls, Ramona L; Cannon, Ethalinda KS; Cannon, Steven B; Cooper, Laurel; Gardiner, Jack; Gkoutos, Georgios V; Harper, Lisa; He, Mingze; Hoehndorf, Robert; Jaiswal, Pankaj; Kalberer, Scott R; Lloyd, John P; Meinke, David; Menda, Naama; Moore, Laura; Nelson, Rex T; Pujar, Anuradha; Lawrence, Carolyn J; Huala, Eva

    2015-01-01

    Background: Plant phenotype datasets include many different types of data, formats, and terms from specialized vocabularies. Because these datasets were designed for different audiences, they frequently contain language and details tailored to investigators with different research objectives and backgrounds. Although phenotype comparisons across datasets have long been possible on a small scale, comprehensive queries and analyses that span a broad set of reference species, research disciplines, and knowledge domains continue to be severely limited by the absence of a common semantic framework. Results: We developed a workflow to curate and standardize existing phenotype datasets for six plant species, encompassing both model species and crop plants with established genetic resources. Our effort focused on mutant phenotypes associated with genes of known sequence in Arabidopsis thaliana (L.) Heynh. (Arabidopsis), Zea mays L. subsp. mays (maize), Medicago truncatula Gaertn. (barrel medic or Medicago), Oryza sativa L. (rice), Glycine max (L.) Merr. (soybean), and Solanum lycopersicum L. (tomato). We applied the same ontologies, annotation standards, formats, and best practices across all six species, thereby ensuring that the shared dataset could be used for cross-species querying and semantic similarity analyses. Curated phenotypes were first converted into a common format using taxonomically broad ontologies such as the Plant Ontology, Gene Ontology, and Phenotype and Trait Ontology. We then compared ontology-based phenotypic descriptions with an existing classification system for plant phenotypes and evaluated our semantic similarity dataset for its ability to enhance predictions of gene families, protein functions, and shared metabolic pathways that underlie informative plant phenotypes. Conclusions: The use of ontologies, annotation standards, shared formats, and best practices for cross-taxon phenotype data analyses represents a novel approach to plant phenomics
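
    A toy illustration of the kind of cross-species comparison the two phenomics records above describe: comparing two genes by the ontology terms annotated to their mutant phenotypes. The gene names and term IDs are made up, and the study itself used richer, hierarchy-aware semantic similarity measures rather than plain Jaccard overlap.

```python
# Toy annotation-set comparison; gene names and ontology term IDs are made up.
def jaccard(terms_a, terms_b):
    union = terms_a | terms_b
    return len(terms_a & terms_b) / len(union) if union else 0.0

annotations = {
    "arabidopsis_geneX": {"PO:0009025", "TO:0000207", "GO:0009908"},
    "maize_geneY":       {"PO:0009025", "TO:0000207", "GO:0010154"},
}

score = jaccard(annotations["arabidopsis_geneX"], annotations["maize_geneY"])
print(f"phenotype-annotation similarity: {score:.2f}")
```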