E-business refers to the use of information and communication technologies (ICT) in support of all the activities of a business. The standards developed for e-business help to facilitate its deployment. In particular, several organizations in the e-business sector have produced standards and representation forms using XML, which serves as an interchange format for exchanging data between communicating applications. However, XML says nothing about the semantics of the tags used; it is merely a standard notation for markup languages, which provides a means of structuring documents. Therefore, XML-based e-business software is developed by hard-coding. Hard-coding has proven to be a valuable and powerful way of exchanging structured and persistent business documents. However, if we use hard-coding for non-persistent documents and non-static environments, we encounter problems in deploying new document types, as each requires a long-lasting standardization process. Replacing existing hard-coded e-business systems with open systems that support semantic interoperability, and which are easily extensible, is the topic of this article. We first consider XML-based technologies and standards developed for B2B interoperation. Then we consider electronic auctions, which represent a form of e-business. In particular, we show how semantic interoperability can be achieved in electronic auctions.
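The point about XML carrying structure but not meaning can be illustrated with a small sketch: two hypothetical trading partners encode the same purchase order under different tag names, and only an externally agreed tag-to-concept mapping (invented here for illustration, exactly the kind of agreement a standardization process produces) recovers a common view.

```python
# Sketch: two partners encode the same order with different XML tags.
# The tag names and the CONCEPTS vocabulary are made up for illustration.
import xml.etree.ElementTree as ET

DOC_A = "<order><buyer>ACME</buyer><total>100</total></order>"
DOC_B = "<po><customer>ACME</customer><amount>100</amount></po>"

# Tag -> shared concept. Agreeing on this mapping is the hard part:
# XML itself says nothing about what the tags mean.
CONCEPTS = {"buyer": "party", "customer": "party",
            "total": "price", "amount": "price"}

def to_concepts(xml_text):
    """Map each child element's tag to its shared concept."""
    root = ET.fromstring(xml_text)
    return {CONCEPTS[child.tag]: child.text for child in root}

print(to_concepts(DOC_A) == to_concepts(DOC_B))  # True: same concepts
```

Without the `CONCEPTS` table, the two documents share no tag names at all, which is the interoperability gap the article addresses.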
Kilic, Ozgur; Dogac, Asuman
Effective use of electronic health records (EHRs) has the potential to positively influence both the quality and the cost of health care. Consequently, sharing patients' EHRs is becoming a global priority in the healthcare information technology domain. This paper addresses the interoperability of EHR structure and content. It describes how two different EHR standards derived from the same reference information model (RIM) can be mapped to each other by using archetypes, refined message information model (R-MIM) derivations, and semantic tools. It is also demonstrated that well-defined R-MIM derivation rules help trace class properties back to their origins when the R-MIMs of two EHR standards are derived from the same RIM. Using well-defined rules also enables finding equivalences between the properties of the source and target EHRs. Yet an R-MIM still defines concepts at a generic level. Archetypes (or templates), on the other hand, constrain an R-MIM to domain-specific concepts and hence provide finer-grained semantics. Therefore, while mapping clinical statements between EHRs, we also make use of the archetype semantics. Derivation statements are inferred from the Web Ontology Language definitions of the RIM, the R-MIMs, and the archetypes. Finally, we show how to transform Health Level Seven clinical statement instances to EHRcom clinical statement instances and vice versa by using the generated mapping definitions.
Félix Oscar Fernández Peña
Knowledge management systems support education at its different levels. This is very important for the process in which Cuban higher education is involved: structural transformations of teaching are focused on supporting the foundation of the information society in the country. This paper describes technical aspects of the design of a model for the integration of multiple knowledge management tools supporting teaching. The proposal is based on the definition of an ontology for the explicit formal description of the semantics of the motivations of students and teachers in the learning process. Its aim is to facilitate knowledge spreading.
Marco-Ruiz, Luis; Bellika, Johan Gustav
The interoperability of Clinical Decision Support (CDS) systems with other health information systems has become one of the main limitations to their broad adoption. Semantic interoperability must be ensured in order to share CDS modules across different health information systems. Currently, numerous standards for different purposes are available to enable the interoperability of CDS systems. We performed a literature review to identify and provide an overview of the available standards that enable CDS interoperability in the areas of clinical information, decision logic, terminology, and web service interfaces.
Folmer, E.J.A.; Verhoosel, J.P.C.
This book contains a broad overview of relevant studies in the area of semantic IS standards. It includes an introduction in the general topic of standardization and introduces the concept of interoperability. The primary focus is however on semantic IS standards, their characteristics, and the qual
Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.
Interoperability between heterogeneous data, tools and services is required to transform data into knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilizing existing XML-based data models, such as GeoSciML and WaterML, to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators" working at different abstraction levels. Therefore, we propose an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant effort has gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will
Satellite- and ocean-based observing systems consist of various sensors and configurations. These observing systems transmit data in heterogeneous file formats, with heterogeneous vocabularies, from various data centers. These data centers maintain centralized data management systems that disseminate the observations to various research communities. Currently, different data naming conventions are used by existing observing systems, leading to semantic heterogeneity. In this work, sensor data interoperability and the semantics of the data are addressed through ontologies. The present work provides an effective technical solution to semantic heterogeneity through semantic technologies, which provide interoperability, the capability to build a knowledge base, and a framework for semantic information retrieval by developing an effective concept vocabulary through domain ontologies. The paper proposes a new methodology to interlink multidisciplinary and heterogeneous sensor data products. A four-phase methodology has been implemented to address satellite data semantic interoperability. The paper concludes with an evaluation of the methodology, linking and interfacing multiple ontologies to arrive at an ontology vocabulary for sensor observations. Data from the Indian meteorological satellite INSAT-3D are used as a typical example to illustrate the concepts. This work can also be extended along similar lines to other sensor observations.
Peer-to-peer systems are evolving with new information-system architectures, leading to the idea that the principles of decentralization and self-organization will offer new approaches in informatics, especially for systems that scale with the number of users or for which central authorities do not prevail. This book describes a new way of building global agreements (semantic interoperability) based only on decentralized, self-organizing interactions.
Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd
Healthcare information is distributed across multiple heterogeneous and autonomous systems, and access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language. The proposed service makes use of an algorithm that can transform data models from different domains by deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
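As a rough illustration of the kind of relational-to-ontology transformation the paper describes, the sketch below maps a toy schema to ontology-style triples: tables become classes, columns become datatype properties, and foreign keys become object properties. The schema, table names, and triple vocabulary are invented for this example, not taken from the paper's algorithm.

```python
# Hypothetical two-table schema: {table: {"columns": [...], "fks": {col: target}}}
SCHEMA = {
    "Patient": {"columns": ["name", "birth_date"], "fks": {}},
    "Visit":   {"columns": ["date"], "fks": {"patient_id": "Patient"}},
}

def schema_to_triples(schema):
    """Emit (subject, predicate, object) triples for a relational schema."""
    triples = []
    for table, spec in schema.items():
        triples.append((table, "rdf:type", "owl:Class"))
        for col in spec["columns"]:
            # Each column becomes a datatype property scoped to its table.
            triples.append((f"{table}.{col}", "rdf:type", "owl:DatatypeProperty"))
            triples.append((f"{table}.{col}", "rdfs:domain", table))
        for fk, target in spec["fks"].items():
            # Each foreign key becomes an object property linking the classes.
            prop = f"has{target}"
            triples.append((prop, "rdf:type", "owl:ObjectProperty"))
            triples.append((prop, "rdfs:domain", table))
            triples.append((prop, "rdfs:range", target))
    return triples

for t in schema_to_triples(SCHEMA):
    print(t)
```

A real service would also serialize these triples in an OWL syntax and apply the inheritance rules the paper emphasizes; this sketch only shows the structural mapping.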
Williams, Antony J; Harland, Lee; Groth, Paul; Pettifer, Stephen; Chichester, Christine; Willighagen, Egon L; Evelo, Chris T; Blomberg, Niklas; Ecker, Gerhard; Goble, Carole; Mons, Barend
Open PHACTS is a public-private partnership between academia, publishers, small and medium-sized enterprises, and pharmaceutical companies. The goal of the project is to deliver and sustain an 'open pharmacological space' (OPS) using and enhancing state-of-the-art semantic web standards and technologies. It is focused on practical and robust applications to solve specific questions in drug discovery research. OPS is intended to facilitate improvements in drug discovery in academia and industry and to support open innovation and in-house non-public drug discovery research. This paper lays out the challenges and how the Open PHACTS project hopes to address them technically and socially.
Khan, Wajahat Ali; Khattak, Asad Masood; Hussain, Maqbool; Amin, Muhammad Bilal; Afzal, Muhammad; Nugent, Christopher; Lee, Sungyoung
Heterogeneity in the management of complex medical data obstructs the attainment of data-level interoperability among Health Information Systems (HIS). This diversity stems from the compliance of HISs with different healthcare standards, and its resolution demands a mediation system that accurately interprets data in different heterogeneous formats. We propose an adaptive mediation system, the AdapteR Interoperability ENgine (ARIEN), that arbitrates between HISs compliant with different healthcare standards to achieve accurate and seamless information exchange and thus data interoperability. ARIEN stores the semantic mapping information between different standards in the Mediation Bridge Ontology (MBO) using ontology matching techniques. These mappings are provided by our System for Parallel Heterogeneity (SPHeRe) matching system and a Personalized Detailed Clinical Model (P-DCM) approach to guarantee their accuracy. The effectiveness of the mappings stored in the MBO is realized by evaluating the accuracy of the transformation process among different standard formats. We evaluated the proposed system on the transformation of medical records between the Clinical Document Architecture (CDA) and Virtual Medical Record (vMR) standards. The transformation achieved over 90% accuracy in conversion between the CDA and vMR standards using a pattern-oriented approach drawing on the MBO. The proposed mediation system improves the overall communication process between HISs, providing accurate and seamless medical information exchange to ensure data interoperability and timely healthcare services to patients.
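A minimal sketch of the mediation idea, with invented field names standing in for real CDA and vMR elements: a bridge mapping (playing the role the Mediation Bridge Ontology plays in ARIEN) pairs fields of a source standard with fields of a target standard, and a transform rewrites records accordingly.

```python
# BRIDGE stands in for the MBO: per (source, target) standard pair, a
# field-to-field mapping. The field names here are illustrative only,
# not actual CDA or vMR element names.
BRIDGE = {
    ("CDA", "vMR"): {
        "observationCode":  "observationFocus",
        "observationValue": "observationValue",
    }
}

def transform(record, source, target):
    """Rewrite a record's fields from one standard to another via the bridge."""
    mapping = BRIDGE[(source, target)]
    # Fields without a mapping are dropped; a real mediator would flag them.
    return {mapping[k]: v for k, v in record.items() if k in mapping}

cda_record = {"observationCode": "body temperature", "observationValue": 37.2}
print(transform(cda_record, "CDA", "vMR"))
```

The real system derives such mappings automatically with ontology matching and validates them against clinical models; here the bridge is written by hand to show the transformation step in isolation.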
As databases become widely used, there is a growing need to translate information between multiple databases. Semantic interoperability and integration has been a long-standing challenge for the database community and has now become a prominent area of database research. In this paper, we aim to answer the question of how semantic interoperability between two databases can be achieved by using Formal Concept Analysis (FCA) and Information Flow (IF) theories. For our purposes, we first discover knowledge from different databases by using FCA, and then align what is discovered by using IF and FCA. The development of FCA has led to software systems such as TOSCANA and TUPLEWARE, which can be used as tools for discovering knowledge in databases. A prototype based on IF and FCA has been developed, and our method is tested and verified by using this prototype and TUPLEWARE.
Liyanage, Harshana; Krause, Paul; De Lusignan, Simon
The present-day health data ecosystem comprises a wide array of complex, heterogeneous data sources in which a wide range of clinical, health care, social and other clinically relevant information is stored. These data exist either as structured data or as free text. They are generally individual person-based records, though social care data are generally case-based, and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or be coded using one of many codings, classifications or terminologies that have often evolved in isolation and were designed to meet the needs of the context in which they were developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held on these different systems challenging. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit; and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating the rapid building of ontologies for time-critical research studies.
Gröger, Gerhard; Plümer, Lutz
CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided in different, well-defined Levels of Detail (a multiresolution model). The focus of CityGML is on the semantic aspects of 3D city models, their structures, taxonomies and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulation, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has come to be used on a worldwide scale: tools from notable companies in the geospatial field provide CityGML interfaces, and many applications and projects use the standard. CityGML is also having a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency response, or energy-related applications as well as for visualization; contribute to CityGML by improving its consistency and validity; or use CityGML, particularly its different Levels of Detail, as a source or target for generalization. This paper gives an overview of CityGML, its underlying concepts, its Levels of Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research. Furthermore, its
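The way CityGML attaches semantics to features can be hinted at with a simplified, not schema-valid, fragment: the element names themselves, not just the coordinates, say what the object is. The namespace URI is assumed to be that of the CityGML 2.0 building module.

```python
# Simplified fragment in the spirit of CityGML: a Building with a
# measured height. Not a schema-valid document; geometry omitted.
import xml.etree.ElementTree as ET

FRAGMENT = """
<bldg:Building xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <bldg:measuredHeight uom="m">12.5</bldg:measuredHeight>
</bldg:Building>
"""

NS = {"bldg": "http://www.opengis.net/citygml/building/2.0"}
root = ET.fromstring(FRAGMENT)
height = root.find("bldg:measuredHeight", NS)
print(height.text, height.get("uom"))  # 12.5 m
```

A purely graphical format would carry the same shape but lose the fact that it is a building with a height in metres, which is exactly the semantics CityGML preserves for analysis tasks.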
Desourdis, Robert I
Supported by over 90 illustrations, this unique book provides a detailed examination of the subject, focusing on the use of voice, data, and video systems for public safety and emergency response. This practical resource makes in-depth recommendations spanning technical, planning, and procedural approaches to achieve efficient public safety response performance. It covers the many approaches used to achieve interoperability, including a synopsis of the enabling technologies and systems intended to provide radio interoperability. Featuring specific examples nationwide, the book takes you
Stocks, K. I.; Chen, Y.; Shepherd, A.; Chandler, C. L.; Dockery, N.; Elya, J. L.; Smith, S. R.; Ferreira, R.; Fu, L.; Arko, R. A.
With informatics providing an increasingly important set of tools for geoscientists, it is critical to train the next generation of scientists in information and data techniques. The NSF-supported Rolling Deck to Repository (R2R) Program works with the academic fleet community to routinely document, assess, and preserve the underway sensor data from U.S. research vessels. The Ocean Data Interoperability Platform (ODIP) is an EU-US-Australian collaboration fostering interoperability among regional e-infrastructures through workshops and joint prototype development. The need to align terminology between systems is a common challenge across all of the ODIP prototypes. Five R2R students were supported to address aspects of semantic interoperability within ODIP: (1) developing a vocabulary matching service that links terms from different vocabularies with similar concepts; the service implements the Google Refine reconciliation service interface, so users can leverage the Google Refine application as a friendly user interface while linking vocabulary terms; (2) developing Resource Description Framework (RDF) resources that map Shipboard Automated Meteorological Oceanographic System (SAMOS) vocabularies to internationally served vocabularies, describing each SAMOS vocabulary term (data parameter and quality-control flag) as an RDF resource page; these RDF resources allow enhanced discoverability and retrieval of SAMOS data by enabling searches based on parameter; (3) improving data retrieval and interoperability by exposing data and mapped vocabularies using Semantic Web technologies; here we have collaborated with ODIP participating organizations to build a generalized data model that will populate a SPARQL endpoint providing expressive querying over our data files; and (4) mapping local and regional vocabularies used by R2R to those used by ODIP partners, work described more fully in a companion poster. Making published Linked Data
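The first student project, a vocabulary matching service, can be caricatured in a few lines: given a local term, rank candidate terms from a served vocabulary by string similarity, a crude stand-in for the richer concept-level matching a reconciliation service performs. The vocabulary entries below are invented examples, not SAMOS terms.

```python
# Toy reconciliation: rank vocabulary terms by string similarity to a
# local term. A real service would also use synonyms and concept links.
from difflib import SequenceMatcher

VOCAB = ["air temperature", "sea surface temperature", "wind speed"]

def reconcile(term, vocabulary=VOCAB):
    """Return the vocabulary term most similar to the given local term."""
    scored = [(SequenceMatcher(None, term.lower(), v).ratio(), v)
              for v in vocabulary]
    return max(scored)[1]

print(reconcile("air temp"))  # air temperature
```

The Google Refine (now OpenRefine) reconciliation interface wraps exactly this kind of candidate-ranking behind a standard HTTP API, so the spreadsheet UI can present ranked matches for each cell.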
Bui, V.T.; Brandt, P.; Liu, H.; Basten, T.; Lukkien, J.
Crucial to the success of Body Area Sensor Networks is the flexibility with which stakeholders can share, extend and adapt the system with respect to sensors, data and functionality. The first step is to develop an interoperable platform with explicit interfaces, which takes care of common managemen
Earth and environmental scientists are familiar with the entities, processes, and theories germane to their field of study, and comfortable collecting and analyzing data in their area of interest. Yet, while there appears to be consistency and agreement as to the scientific "terms" used to describe features in their data and analyses, aside from a few fundamental physical characteristics such as mass or velocity, there can be broad tolerances, if not considerable ambiguity, in how many earth science "terms" map to the underlying "concepts" that they actually represent. This ambiguity in meanings, or "semantics", creates major problems for scientific reproducibility. It greatly impedes the ability to replicate results, by making it difficult to determine the specifics of the intended meanings of terms such as deforestation or carbon flux (as to scope, composition, magnitude, etc.). In addition, semantic ambiguity complicates the assemblage of comparable data for reproducing results, due to ambiguous or idiosyncratic labels for measurements, such as percent cover of forest, where the term "forest" is undefined; or where a reported output of "total carbon emissions" might include CO2 emissions but not methane emissions. In this talk, we describe how the NSF-funded DataONE repository for earth and environmental science data (http://dataone.org) is using W3C-standard languages (RDF/OWL) to build an ontology for clarifying concepts embodied in heterogeneous data and model outputs. With an initial focus on carbon-cycling concepts using terrestrial biospheric model outputs and LTER productivity data, we describe how we are achieving interoperability with "semantic vocabularies" (or ontologies) from aligned earth and life science domains, including OBO Foundry ontologies such as ENVO and BCO, the ISO/OGC O&M, and the NSF EarthCube GeoLink project. Our talk will also discuss best practices that may be helpful for other groups interested in constructing their own
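The "total carbon emissions" ambiguity described above is exactly what explicit assertions can resolve. This hand-rolled triple sketch (illustrative term names, not DataONE's actual ontology) records which emission kinds a dataset's reported figure covers, so the gap is machine-checkable instead of implicit.

```python
# Hand-rolled triples making a dataset's coverage explicit. The class
# and dataset names are invented for illustration.
TRIPLES = {
    ("CO2Emission",     "subClassOf", "CarbonEmission"),
    ("MethaneEmission", "subClassOf", "CarbonEmission"),
    # DatasetA's "total" covers CO2 only, not methane:
    ("DatasetA.total",  "covers",     "CO2Emission"),
}

def covers(measurement):
    """All emission classes a measurement explicitly covers."""
    return {o for (s, p, o) in TRIPLES if s == measurement and p == "covers"}

# A consumer can now detect that methane is missing from the figure:
print("MethaneEmission" in covers("DatasetA.total"))  # False
```

In RDF/OWL the same idea is expressed with `rdfs:subClassOf` and a coverage property, and a reasoner can flag datasets whose "total" omits subclasses of the concept being compared.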
KOSOVO ARMED FORCES DEVELOPMENT: ACHIEVING NATO NON-ARTICLE 5 CRISIS RESPONSE OPERATIONS INTEROPERABILITY. A thesis presented on the development of the Kosovo Armed Forces, their authorities, and their participation in international operations. One objective of the Kosovo Armed Forces is to be fully interoperable with NATO members, in
The new Swedish Patient Act, which allows patients to choose health care in county councils other than their own, creates the need to share health-related information contained in electronic health records (EHRs) across county councils. This demands interoperability in terms of structured and standardized data. Headings in EHRs can also be part of structured and standardized data. The aim was to study to what extent terminology is shared and standardized across county councils in Sweden. Headings from three county councils were analyzed to see to what extent they were shared and to what extent they corresponded to concepts in SNOMED CT and the National Board of Health and Welfare's term dictionary (NBHW's TD). In total, 41% of the headings were shared across two or three county councils. A third of the shared headings corresponded to concepts in SNOMED CT, and an eighth corresponded to concepts in NBHW's TD. The results showed that the extent of shared and standardized terminology, in terms of headings, across the three studied county councils was negligible.
Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
The communication between health information systems of hospitals and primary care organizations is currently an important challenge for improving the quality of clinical practice and patient safety. However, clinical information is usually distributed among several independent systems that may be syntactically or semantically incompatible. This prevents healthcare professionals from accessing clinical information about patients in an understandable and normalized way. In this work, we address the semantic interoperability of two EHR standards: openEHR and ISO EN 13606. Both standards follow the dual-model approach, which distinguishes information from knowledge, the latter represented through archetypes. The solution presented here is capable of transforming openEHR archetypes into ISO EN 13606 and vice versa by combining Semantic Web and Model-Driven Engineering technologies. The resulting software implementation has been tested using publicly available collections of archetypes for both standards.
In the past decade, feature-based design and manufacturing has gained momentum in various engineering domains as a way to represent and reuse semantic patterns with effective applicability. However, the actual scope of feature application is still very limited. Semantic Modeling and Interoperability in Product and Process Engineering provides a systematic solution for this challenging engineering informatics field, aiming at the enhancement of sustainable knowledge representation, implementation and reuse on an open and yet practically manageable scale. This semantic modeling technology supports uniform, multi-facet and multi-level collaborative system engineering with heterogeneous computer-aided tools such as CAD/CAM, CAE, and ERP. The presented unified feature model can be applied to product and process representation, development, implementation and management. Practical case studies and test samples are provided to illustrate applications which readers can implement in real-world scenarios.
Wright, D. J.; Lassoued, Y.; Dwyer, N.; Haddad, T.; Bermudez, L. E.; Dunne, D.
Coastal mapping plays an important role in informing marine spatial planning, resource management, maritime safety, hazard assessment and even national sovereignty. As such, there is now a plethora of data/metadata catalogs, pre-made maps, tabular and text information on resource availability and exploitation, and decision-making tools. A recent trend has been to encapsulate these in a special class of web-enabled geographic information systems called a coastal web atlas (CWA). While multiple benefits are derived from tailor-made atlases, there is great value added in the integration of disparate CWAs. CWAs linked to one another can be queried more successfully to optimize planning and decision-making; if a dataset is missing in one atlas, it may be immediately located in another, and similar datasets in two atlases may be combined to enhance study in either region. But how best to achieve semantic interoperability, mitigating vague data queries, concepts or natural-language semantics when retrieving and integrating data and information? We report on the development of a new prototype seeking to interoperate between two initial CWAs: the Marine Irish Digital Atlas (MIDA) and the Oregon Coastal Atlas (OCA). These two mature atlases are used as a testbed for more regional connections, with the intent for the OCA to use lessons learned to develop a regional network of CWAs along the west coast, and for MIDA to do the same in building and strengthening atlas networks with the UK, Belgium, and other parts of Europe. Our prototype achieves semantic interoperability via services harmonization and ontology mediation, allowing local atlases to use their own data structures and vocabularies (ontologies). We use standard technologies such as the OGC Web Map Service (WMS) for delivering maps, and the OGC Catalogue Service for the Web (CSW) for delivering and querying ISO 19139 metadata. The metadata records of a given CWA use a given ontology of terms called the local ontology. Human or machine
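The WMS map-delivery step mentioned above can be illustrated by assembling a GetMap request URL from the standard WMS 1.3.0 query parameters; the endpoint and layer name are placeholders, not MIDA or OCA services.

```python
# Build an OGC WMS 1.3.0 GetMap URL. Endpoint and layer are placeholders.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, size=(512, 512)):
    """Assemble a GetMap request with the standard WMS 1.3.0 parameters."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        # Note: in WMS 1.3.0 with EPSG:4326 the axis order is
        # latitude-first, i.e. (min_lat, min_lon, max_lat, max_lon).
        "BBOX": ",".join(str(c) for c in bbox),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("https://example.org/wms", "coastline",
                 (51.0, -11.0, 56.0, -5.0))
print(url)
```

CSW requests are built the same way (`REQUEST=GetRecords` against a catalogue endpoint); the prototype's ontology mediation sits above this layer, rewriting the terms used in such queries between local and shared vocabularies.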
In e-learning systems, the tutor plays a very important role in supporting learners and guaranteeing quality learning. Successful collaboration between learners and their tutor requires the use of communication tools. Thanks to their flexibility in terms of time, asynchronous tools such as discussion forums are the most used. However, this type of tool generates a great mass of messages, making tutoring a complex operation to manage; hence the need for a message classification tool. We first proposed a semantic classification tool based on LSA and a thesaurus. The possibility that an ontology offers to overcome the limitations of the thesaurus encouraged us to use one to control our vocabulary. By way of our proposed selection algorithm, the OWL ontology is queried to generate new terms which are used to build the LSA matrix. The integration of a formal OWL ontology provides a highly relevant semantic classification of messages, and the reuse of the ontological knowledge base by other applications is also guaranteed. Interoperability and knowledge exchange between systems are likewise ensured by the integrated ontology. To ensure its reuse and interoperability with systems requesting its classification service, our semantic classifier tool is implemented on a service-oriented architecture (SOA), which is explained and tested in this work.
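The message comparison underlying the LSA-based classifier can be sketched, omitting the SVD dimensionality reduction that LSA adds on top, as a term-document matrix plus cosine similarity. The vocabulary and forum messages are toy examples.

```python
# Term-vector comparison of forum messages: the step LSA builds on,
# without the SVD reduction. Vocabulary and messages are toy data.
from collections import Counter
from math import sqrt

def vectorize(text, vocab):
    """Count how often each vocabulary term occurs in the text."""
    counts = Counter(text.lower().split())
    return [counts[t] for t in vocab]

def cosine(u, v):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ["exam", "deadline", "forum", "grade"]
msg_a = vectorize("exam grade question", vocab)
msg_b = vectorize("when is the exam grade posted", vocab)
msg_c = vectorize("forum deadline extension", vocab)
print(cosine(msg_a, msg_b) > cosine(msg_a, msg_c))  # True
```

The ontology's contribution in the paper is to expand `vocab` with controlled related terms before the matrix is built, so semantically related messages overlap even when their surface words differ.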
Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O
In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.
Bucur, Anca; van Leeuwen, Jasper; Chen, Njin-Zu; Claerhout, Brecht; de Schepper, Kristof; Perez-Rey, David; Paraiso-Medina, Sergio; Alonso-Calvo, Raul; Mehta, Keyur; Krykwinski, Cyril
This paper describes a new Cohort Selection application implemented to support streamlining the definition phase of multi-centric clinical research in oncology. Our approach aims at both ease of use and precision in defining the selection filters expressing the characteristics of the desired population. The application leverages our standards-based Semantic Interoperability Solution and a Groovy DSL to provide high expressiveness in the definition of filters and flexibility in their composition into complex selection graphs including splits and merges. Widely-adopted ontologies such as SNOMED-CT are used to represent the semantics of the data and to express concepts in the application filters, facilitating data sharing and collaboration on joint research questions in large communities of clinical users. The application supports patient data exploration and efficient collaboration in multi-site, heterogeneous and distributed data environments.
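The filter composition described above (splits and merges aside) can be suggested with a small Python analogue of the paper's Groovy DSL; the combinators, the SNOMED CT code, and the patient records are all illustrative, not taken from the application.

```python
# Illustrative composable cohort filters: each filter is a predicate on
# a patient record, and combinators build complex selections from them.
def f_and(*filters):
    """A patient passes only if every sub-filter accepts it."""
    return lambda p: all(f(p) for f in filters)

def has_code(code):
    """Select patients whose record carries the given concept code."""
    return lambda p: code in p["codes"]

def age_over(years):
    return lambda p: p["age"] > years

# Hypothetical filter: a SNOMED CT concept combined with an age bound.
cohort_filter = f_and(has_code("SNOMED:254837009"), age_over(50))

patients = [{"age": 62, "codes": {"SNOMED:254837009"}},
            {"age": 44, "codes": {"SNOMED:254837009"}}]
selected = [p for p in patients if cohort_filter(p)]
print(len(selected))  # 1
```

Expressing criteria against ontology concepts rather than site-local codes is what lets the same filter run unchanged across the heterogeneous sites the paper targets.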
Ibrahim, Ahmed; Bucur, Anca; Perez-Rey, David; Alonso, Enrique; de Hoog, Matthy; Dekker, Andre; Marshall, M Scott
This paper describes the data transformation pipeline defined to support the integration of a new clinical site in a standards-based semantic interoperability environment. The available datasets combined structured and free-text patient data in Dutch, collected in the context of radiation therapy in several cancer types. Our approach aims at both efficiency and data quality. We combine custom-developed scripts, standard tools and manual validation by clinical and knowledge experts. We identified key challenges emerging from the several sources of heterogeneity in our case study (systems, language, data structure, clinical domain) and implemented solutions that we will further generalize for the integration of new sites. We conclude that the required effort for data transformation is manageable which supports the feasibility of our semantic interoperability solution. The achieved semantic interoperability will be leveraged for the deployment and evaluation at the clinical site of applications enabling secondary use of care data for research. This work has been funded by the European Commission through the INTEGRATE (FP7-ICT-2009-6-270253) and EURECA (FP7-ICT-2011-288048) projects.
Garde, Sebastian; Chen, Rong; Leslie, Heather; Beale, Thomas; McNicoll, Ian; Heard, Sam
Formal modeling of clinical content that can be made available internationally is one of the most promising pathways to semantic interoperability of health information. Drawing on extensive experience from openEHR archetype research and implementation work, we present the latest research and development in this area to improve the semantic interoperability of Electronic Health Records (EHRs) using openEHR (ISO 13606) archetypes. Archetypes, as the formal definition of clinical content, need to be of high technical and clinical quality. We will start with a brief introduction to the openEHR architecture, followed by presentations on specific topics related to the management of a wide range of clinical knowledge artefacts. We will describe a web-based review process for archetypes that enables international involvement and ensures that released archetypes are technically and clinically correct. Tools for the validation of archetypes will be presented, along with templates and compliance templates. In combination, these enable the openEHR computing platform to serve as the foundation for safely sharing the information clinicians need, for using this information within computerized clinical guidelines and decision support, and for migrating legacy data.
Evelyn Johanna Sophia Hovenga
Full Text Available This article examines, in general terms, the relationships between government leaders responsible for health policy, healthcare providers, and the adoption of healthcare information, communication and knowledge technologies. These technologies include the adoption of national health language structures and health informatics standards. These reflections are based on the authors' observations and international participation in standards development, and on the development and implementation of governmental information and communication technologies over many years. A considerable number of critical concepts appear to be poorly understood by those responsible for key decision-making; alternatively, political agendas and the need to serve a variety of vested interests continue to dominate. We conclude that we must establish and actively promote a solid professional example for the adoption of a national health informatics strategy based on the best available scientific evidence, in support of a sustainable health system.
Plini, Paolo; De Santis, Valentina; Uricchio, Vito F; De Carlo, Dario; D'Arpa, Stefania; De Martino, Monica; Albertoni, Riccardo
The GIIDA project aims to develop a digital infrastructure for spatial information within CNR. It foresees the use of semantic-oriented technologies to ease information modeling and connection, according to international standards like ISO/IEC 11179. Complex information management systems like GIIDA will benefit from the use of terminological tools like thesauri, which make available a reference lexicon for the indexing and retrieval of information. Within GIIDA the goal is to make available the EARTh thesaurus (Environmental Applications Reference Thesaurus), developed by the CNR-IIA-EKOLab. A web-based software tool, developed by the CNR Water Research Institute (IRSA), was implemented to allow consultation and use of the thesaurus through the web. This service is a useful tool to ensure interoperability between the thesaurus and other indexing systems, with the idea of cooperating to develop a comprehensive knowledge organization system that could be described as integrated, open, and multi-functional.
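Thesaurus-backed indexing and retrieval of this kind can be illustrated with a toy broader-term relation; the terms below are invented examples, not EARTh content:

```python
# Toy broader-term relation for recall-oriented query expansion; the
# terms are invented examples, not actual EARTh thesaurus content.
broader = {
    "groundwater": "water",
    "surface water": "water",
    "water": "environment",
}

def expand_query(term):
    """Return the term together with all of its broader terms."""
    terms = [term]
    while term in broader:
        term = broader[term]
        terms.append(term)
    return terms
```

A search service can index resources under specific terms and still match queries phrased at broader levels, which is what makes a shared reference lexicon useful for interoperability.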
Blobel, Bernd; Kalra, Dipak; Koehn, Marc; Lunn, Ken; Pharow, Peter; Ruotsalainen, Pekka; Schulz, Stefan; Smith, Barry
As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures. The workshop is a joint activity of the EFMI Working Groups "Electronic Health Records" and "Security, Safety and Ethics".
Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio
The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has been already demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight data centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds, holds the promise to increase the potentiality of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies will be described. The specification and design of the implemented system and information models will be described together with the practical details of the developments carried out and its integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale, that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds.
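The core of such a federation, mapping platform-specific payloads onto one shared observation model so that data can be queried uniformly, can be sketched as follows; the payload shapes and field names are invented for illustration and do not reflect the actual platforms:

```python
# Hypothetical sketch: federating readings from two IoT platforms by
# mapping their platform-specific payloads onto one shared observation
# model. Payload shapes and field names are invented for illustration.

def from_platform_a(msg):
    return {"sensor": msg["dev_id"], "property": msg["type"],
            "value": msg["val"], "unit": msg["unit"]}

def from_platform_b(msg):
    return {"sensor": msg["resource"], "property": msg["observedProperty"],
            "value": msg["result"]["value"], "unit": msg["result"]["uom"]}

federated = [
    from_platform_a({"dev_id": "a-42", "type": "temperature",
                     "val": 21.5, "unit": "Cel"}),
    from_platform_b({"resource": "b-7", "observedProperty": "temperature",
                     "result": {"value": 22.1, "uom": "Cel"}}),
]

# once in the shared model, readings from both silos can be queried uniformly
temps = [o["value"] for o in federated if o["property"] == "temperature"]
```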
would considerably alter the current privacy setting. First, the current Directive would be replaced with a Regulation, achieving EU-wide harmonization. Second, the scope of the instrument would be widened and the provisions made more precise. Third, the use of consent for data processing would...
Aso, Santiago; Perez-Rey, David; Alonso-Calvo, Raul; Rico-Diez, Antonio; Bucur, Anca; Claerhout, Brecht; Maojo, Victor
Current post-genomic clinical trials in cancer involve the collaboration of several institutions. Multi-centric retrospective analysis requires advanced methods to ensure semantic interoperability. In this scenario, the objective of the EU-funded INTEGRATE project is to provide an infrastructure to share knowledge and data in post-genomic breast cancer clinical trials. This paper presents the process carried out in this project to bind domain terminologies in the area, such as SNOMED CT, to the HL7 v3 Reference Information Model (RIM). The proposed terminology binding follows the HL7 recommendations, but must also consider important issues such as overlapping concepts and domain terminology coverage. Although there are limitations due to the large heterogeneity of the data in the area, the proposed process has been successfully applied within the context of the INTEGRATE project. Improving the semantic interoperability of patient data from modern breast cancer clinical trials aims to enhance clinical practice in oncology.
Honko, Harri; Andalibi, Vafa; Aaltonen, Timo; Parak, Jakub; Saaranen, Mika; Viik, Jari; Korhonen, Ilkka
Novel health monitoring devices and applications allow consumers easy and ubiquitous ways to monitor their health status. However, technologies from different providers lack both technical and semantic interoperability and hence the resulting health data are often deeply tied to a specific service, which is limiting its reusability and utilization in different services. We have designed a Wellness Warehouse Engine (W2E) that bridges this gap and enables seamless exchange of data between different services. W2E provides interfaces to various data sources and makes data available via unified representational state transfer application programming interface to other services. Importantly, it includes Unifier--an engine that allows transforming input data into generic units reusable by other services, and Analyzer--an engine that allows advanced analysis of input data, such as combining different data sources into new output parameters. In this paper, we describe the architecture of W2E and demonstrate its applicability by using it for unifying data from four consumer activity trackers, using a test base of 20 subjects each carrying out three different tracking sessions. Finally, we discuss challenges of building a scalable Unifier engine for the ever-enlarging number of new devices.
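A "Unifier"-style transformation into generic units might look like the following sketch; since the W2E interfaces are not detailed here, the function and field names are assumptions made for illustration:

```python
# Sketch of a Unifier-style transformation, assuming each tracker
# reports distance in its own unit; names are illustrative, not the
# actual W2E API. Conversion factors are to metres.

TO_METRES = {"m": 1.0, "km": 1000.0, "mi": 1609.344}

def unify(samples):
    """Convert heterogeneous distance samples into a generic unit (metres)."""
    return [{"source": s["source"],
             "distance_m": s["value"] * TO_METRES[s["unit"]]}
            for s in samples]

samples = [
    {"source": "tracker_a", "value": 2.0, "unit": "km"},
    {"source": "tracker_b", "value": 1500, "unit": "m"},
]
unified = unify(samples)
```

Once all sources report in the same generic units, an Analyzer-style component can combine them into new output parameters regardless of which device produced them.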
Headayetullah, Md; Biswas, Sanjay; Puthal, B
This paper presents a conceptual overview of vertical integration, semantic interoperability architectures such as the Educational Sector Architectural Framework (ESAF) for the New Zealand government, and different interoperability framework solutions for digital government. In this paper, we try to develop a secure information-sharing approach for digital government to improve homeland security. This approach is a role- and cooperation-based approach for security personnel of different government departments. In order to run a successful digital government in any country in the world, it is necessary to interact with citizens and to share secure information via different networks with citizens or other governments. Consequently, in order to enable users to cooperate and share information transparently and seamlessly across different networks and databases universally, a safe and trusted information-sharing environment has been recognized as a very important requirement and t...
Full Text Available The need for high-quality out-of-hospital healthcare is a known socioeconomic problem. Exploiting ICT's evolution, ad-hoc telemedicine solutions have been proposed in the past. Integrating such ad-hoc solutions in order to cost-effectively support the entire healthcare cycle is still a research challenge. In order to handle the heterogeneity of relevant information and to overcome the fragmentation of out-of-hospital instrumentation in person-centric healthcare systems, a shared and open source interoperability component can be adopted, which is ontology driven and based on the semantic web data model. The feasibility and the advantages of the proposed approach are demonstrated by presenting the use case of real-time monitoring of patients' health and their environmental context.
Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E
Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) are essential for effective immunization CDS systems. Service-oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time is less than a minute for synchronous operation calls, and the retrieved immunization history of infants was always identical across systems. CDS-generated reports were in accordance with immunization guidelines, and the calculations of next visit times were accurate. Interoperability between IISs is rare or nonexistent. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information.
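The next-visit calculation performed by such an immunization CDS can be sketched as below; the vaccine name and schedule intervals are illustrative, not an official guideline:

```python
# Sketch of a next-visit calculation as performed by an immunization
# CDS; the schedule (weeks after birth per dose) is illustrative only.
from datetime import date, timedelta

SCHEDULE_WEEKS = {"DTP": [0, 8, 16]}   # hypothetical 3-dose series

def next_due(vaccine, birth_date, doses_given):
    """Return the due date of the next dose, or None if the series is complete."""
    weeks = SCHEDULE_WEEKS[vaccine]
    if doses_given >= len(weeks):
        return None          # series complete, no further visit needed
    return birth_date + timedelta(weeks=weeks[doses_given])

# infant born 2024-01-01 who has already received one dose
due = next_due("DTP", date(2024, 1, 1), doses_given=1)
```

The accuracy of such calculations depends directly on the exchanged immunization history being complete and identical across systems, which is why IIS interoperability is the enabling factor.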
systems should support user-defined collaborative extensions to ontologies as the semantics of data sources change or evolve. Research in folksonomy ... reliability, responsibility, accuracy; trust computation and representation; folksonomy as emerging semantics; automated and semi-automated alignment ... potential short-term investment: prove via demonstration activities the semantic interoperability use of domain knowledge via ontology (expressiveness, folksonomy
... efforts on Data Distribution Services (DDS) for its applicability to military IT/C2 systems operating in a denied environment. DDS is advertised ... interoperability is simply the ability to send signals or bytes through a reliable physical connection. Technical integration is more feasible and less complex
Madin, J.; Bowers, S.; Jones, M.; Schildhauer, M.
interoperability by describing the semantics of data at the level of observation and measurement (rather than the traditional focus at the level of the data set) and will define the necessary specifications and technologies to facilitate semantic interpretation and integration of observational data for the environmental sciences. As such, this initiative will focus on unifying the various existing approaches for representing and describing observation data (e.g., SEEK's Observation Ontology, CUAHSI's Observation Data Model, NatureServe's Observation Data Standard, to name a few). Products of this initiative will be compatible with existing standards and build upon recent advances in knowledge representation (e.g., W3C's recommended Web Ontology Language, OWL) that have demonstrated practical utility in enhancing scientific communication and data interoperability in other communities (e.g., the genomics community). A community-sanctioned, extensible, and unified model for observational data will support metadata standards such as EML while reducing the "babel" of scientific dialects that currently impede effective data integration, which will in turn provide a strong foundation for enabling cross-disciplinary synthetic research in the ecological and environmental sciences.
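Observation-level semantics of this kind can be sketched as measurements that each carry their own observed entity, characteristic and unit, so that integration logic can reason per observation rather than per data set; the terms below are placeholders, not drawn from any of the ontologies named above:

```python
# Sketch of observation-level semantics: each measurement carries the
# entity observed, the characteristic measured, and its unit. The term
# strings are invented placeholders, not real ontology identifiers.

observations = [
    {"entity": "taxon:Quercus_alba", "characteristic": "height",
     "value": 19.2, "unit": "m"},
    {"entity": "taxon:Quercus_alba", "characteristic": "stem_diameter",
     "value": 0.41, "unit": "m"},
]

def values_for(obs, characteristic):
    """Select comparable values across data sets by shared characteristic."""
    return [o["value"] for o in obs if o["characteristic"] == characteristic]
```

Because every observation is self-describing, records from differently structured data sets can be pooled and filtered by characteristic without first reconciling whole-data-set schemas.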
Cialone, Claudia; Stock, Kristin
EuroGEOSS is a European Commission funded project. It aims at improving the scientific understanding of the complex mechanisms which drive changes affecting our planet, identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of already existing frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels across these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area. Secondly, EuroGEOSS is intended to accommodate a number of
Full Text Available This paper presents a conceptual overview of vertical integration, semantic interoperability architectures such as the Educational Sector Architectural Framework (ESAF) for the New Zealand government, and different interoperability framework solutions for digital government. In this paper, we try to develop a secure information-sharing approach for digital government to improve homeland security. This approach is a role- and cooperation-based approach for security personnel of different government departments. In order to run a successful digital government in any country in the world, it is necessary to interact with citizens and to share secure information via different networks with citizens or other governments. Consequently, in order to enable users to cooperate and share information transparently and seamlessly across different networks and databases universally, a safe and trusted information-sharing environment has been recognized as a very important requirement and a way to press forward the homeland security endeavor. The key motivation behind this research is to build a secure and trusted information-sharing approach for government departments. This paper presents an efficient role- and teamwork-based information-sharing approach for the safe exchange of confidential and privileged information among security personnel and government departments inside national boundaries by means of public-key cryptography. The approach makes use of a cryptographic hash function, a public-key cryptosystem, and a unique and complex mapping function for securely exchanging secret information. Moreover, the proposed approach facilitates privacy-preserving information sharing with possible restrictions based on the rank of the security personnel. The proposed role- and collaboration-based information-sharing approach ensures protected and up-to-date information sharing between security personnel and government
Powell, James E [Los Alamos National Laboratory; Collins, Linn M [Los Alamos National Laboratory; Martinez, Mark L B [Los Alamos National Laboratory
In certain types of 'slow burn' emergencies, careful accumulation and evaluation of information can offer a crucial advantage. The SARS outbreak in the first decade of the 21st century was such an event, and ad hoc journal clubs played a critical role in assisting scientific and technical responders in identifying and developing various strategies for halting what could have become a dangerous pandemic. This research-in-progress paper describes a process for leveraging emerging semantic web and digital library architectures and standards to (1) create a focused collection of bibliographic metadata, (2) extract semantic information, (3) convert it to the Resource Description Framework /Extensible Markup Language (RDF/XML), and (4) integrate it so that scientific and technical responders can share and explore critical information in the collections.
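Step (3) of the pipeline above, serializing extracted bibliographic metadata as RDF/XML, can be sketched with the standard library and Dublin Core properties; the record content is a made-up example:

```python
# Minimal sketch of serializing one bibliographic record as RDF/XML
# with Dublin Core properties, using only the Python standard library.
# The record URI, title and creator are invented example values.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"

def record_to_rdfxml(uri, title, creator):
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description", {f"{{{RDF}}}about": uri})
    ET.SubElement(desc, f"{{{DC}}}title").text = title
    ET.SubElement(desc, f"{{{DC}}}creator").text = creator
    return ET.tostring(root, encoding="unicode")

xml = record_to_rdfxml("urn:example:record-1",
                       "SARS coronavirus genome analysis", "Doe, J.")
```

In RDF form, records from many contributors can be merged into one graph and explored with standard semantic web tooling, which is the sharing step (4) the paper describes.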
Semantic IS (Information Systems) standards are essential for achieving interoperability between organizations. However, a recent survey suggests that the full benefits of standards are not being achieved, due to quality issues. This paper presents a quality model for semantic IS standards, that should
... of knowledge-based methods based on ontologies for the bridging of semantic gaps between different systems. Finally, 'Battle Management Language' ... sporadic, and a few reference projects have been created, but on the one hand further in-depth research is needed and, on the other hand, the
Fernandez-Breis, Jesualdo Tomas; Menarguez-Tortosa, Marcos; Martinez-Costa, Catalina; Fernandez-Breis, Eneko; Herrero-Sempere, Jose; Moner, David; Sanchez, Jesus; Valencia-Garcia, Rafael; Robles, Montserrat
Archetypes facilitate the sharing of clinical knowledge and are therefore a basic tool for achieving interoperability between healthcare information systems. In this paper, a Semantic Web system for managing archetypes is presented. This system allows for the semantic annotation of archetypes, as well as for performing semantic searches. The current system is capable of working with both ISO 13606 and openEHR archetypes.
Charalabidis, Yannis; Lampathaki, Fenareti; Askounis, Dimitris
As digital infrastructures increase their presence worldwide, following the efforts of governments to provide citizens and businesses with high-quality one-stop services, there is a growing need for the systematic management of those newly defined and constantly transforming processes and electronic documents. E-government Interoperability Frameworks usually cater to the technical standards of e-government systems interconnection, but do not address service composition and use by citizens, businesses, or other administrations.
This thesis contributes to the application of semantic web concepts to achieve traceability in cross-disciplinary development projects. Specifically, it focuses on two fields: mechanical engineering and software engineering.
Diggelen, J. van
Software agents sharing the same ontology can exchange their knowledge fluently as their knowledge representations are compatible with respect to the concepts regarded as relevant and with respect to the names given to these concepts. However, in open heterogeneous multi-agent systems, this scenario
Folmer, E.J.A.; Oude Luttighuis, P.H.W.M.; Hillegersberg, J. van
The quality of semantic standards is unaddressed in current research, while there is an explicit need from standard developers. The business importance is evident, since the quality of standards will have an impact on their diffusion and on the interoperability achieved in practice. An instrument to measure the quality of
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the point of view of principle, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the point of view of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision - for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features - within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects - the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher-level cognition and dynamic updating - for a test image, the model can achieve classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
Nieves Sánchez, Juan Carlos; Ortega de Mues, Mariano; Espinoza, Angelina; Rodríguez Álvarez, Daniel
According to the Electric Power Research Institute (EPRI), a common semantic model is necessary for achieving interoperability in the Smart Grid vision. In this paper, we present an outline of two influential International Electrotechnical Commission standards (CIM and IEC 61850) for building a common semantic model in a Smart Grid vision. In addition, we review two representative approaches suggested by EPRI for harmonizing these standards into a common semantic model. The pros and cons betwe...
Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.
Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company-specific and error-prone, and leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on the standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We see that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.
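Describing mask-order parameters in XML, as proposed above, might look like the following sketch; the element and parameter names are invented for illustration and are not the actual SEMI P10 schema:

```python
# Illustrative sketch of describing mask-order parameters in XML; the
# element and parameter names are invented, not the real P10 schema.
import xml.etree.ElementTree as ET

def build_order(order_id, params):
    """Build an XML mask-order document from a dict of named parameters."""
    order = ET.Element("MaskOrder", {"id": order_id})
    for name, value in params.items():
        p = ET.SubElement(order, "Parameter", {"name": name})
        p.text = str(value)
    return order

order = build_order("MO-001", {"gridSize": "5nm", "pattern": "OPC"})
names = [p.get("name") for p in order.findall("Parameter")]
```

An XML schema over such a structure is what would let both sides validate mandatory and optional parameters automatically, addressing the missing-or-wrong-inputs problem the abstract describes.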
Bahga, Arshdeep; Madisetti, Vijay K
We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). The lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) - that achieves semantic interoperability through the use of a generic design methodology, which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach, which comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and its approaches to semantic interoperability, data integration, and security.
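The two-level pattern described above - a generic reference model constrained by archetypes that fix the allowed clinical attributes - can be sketched as follows; the attribute names and value ranges are illustrative, not taken from CHISTAR:

```python
# Sketch of the two-level modeling approach: generic name/value entries
# (reference model) validated against an "archetype" that constrains
# attributes and ranges. Names and ranges are illustrative examples.

archetype_bp = {                      # hypothetical blood-pressure archetype
    "systolic": {"min": 0, "max": 300},
    "diastolic": {"min": 0, "max": 200},
}

def validate(entry, archetype):
    """Check a generic entry against the archetype's constraints."""
    for name, value in entry.items():
        c = archetype.get(name)
        if c is None or not (c["min"] <= value <= c["max"]):
            return False
    return True

ok = validate({"systolic": 120, "diastolic": 80}, archetype_bp)
bad = validate({"systolic": 400}, archetype_bp)
```

Keeping clinical specifics in archetypes rather than in the storage model is what lets new clinical concepts be added without changing the underlying data structures.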
The search for the Holy Grail of interoperability of business processes, services and semantics continues with every new search for a Silver Bullet. Most approaches towards interoperability focus narrowly on a simplistic, technology-centric notion that supports cowboy-style development without much regard for metadata or semantics. At the same time, the distortions of semantics created by many current modeling paradigms and approaches, including the disharmony created by a multiplicity of parallel approaches to standardization, are not helping us resolve the real issues facing knowledge and semantics management. This paper addresses some of the questions facing us (What have we achieved? Where did we go wrong? What are we doing right?), provides a candid snapshot of an approach to harmonizing interoperability, and proposes a common platform to support business processes, services and semantics.
Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin
Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a "system of systems" could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method.
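The registration-and-discovery mechanism can be sketched as an annotation-based registry: subsystems register services with semantic tags, and discovery matches on required tags. The tag vocabulary below is invented for illustration; a real system would draw annotations from a shared ontology and support richer reasoning than exact tag matching.

```python
# Minimal sketch of ontology-annotated service registration and discovery.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, annotations):
        """Register a service under a set of semantic annotations."""
        self._services[name] = set(annotations)

    def discover(self, required):
        """Return all services whose annotations cover the required set."""
        required = set(required)
        return sorted(name for name, tags in self._services.items()
                      if required <= tags)

registry = ServiceRegistry()
registry.register("lamp-17", {"domain:lighting", "capability:dimming"})
registry.register("cam-03", {"domain:traffic", "capability:counting"})

lighting = registry.discover({"domain:lighting"})
```

An ontology-backed version would additionally expand the required set with sub- and super-concepts before matching, so a query for "domain:mobility" could also find traffic services.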
In the last article I wrote, I discussed the requirements an engineering specification must meet to become a successful standard, and the role of specifications and standards in achieving interoperability.
Wilson, B. D.; Manipon, G.; Xing, Z.
Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty of agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
Higuera Portilla, Jorge Eduardo; Polo Cantero, José
Syntactic and semantic interoperability is a challenge for Wireless Sensor Networks (WSN) with smart sensors in pervasive computing environments, needed to increase their harmonization in a wide variety of applications. This chapter contains a detailed description of interoperability in heterogeneous WSN using the IEEE 1451 standard. This work focuses on personal area networks (PAN) with smart sensors and actuators. Also, technical, syntactic and semantic levels of interoperability based on ...
[Figure 2: Chronology of published interoperability measurement methods (non-maturity-model-based methods in italics, maturity-based methods in boldface).] ... a method called Levels of Information System Interoperability (LISI) (Figure 3). (Ibid) This method was eventually formalized and mandated in CJCSI
Basic semantic architecture of interoperability for the intelligent distribution in the CFE electrical system; Arquitectura base de interoperabilidad semantica para el sistema electrico de distribucion inteligente en la CFE
Espinosa Reza, Alfredo; Garcia Mendoza, Raul; Borja Diaz, Jesus Fidel; Sierra Rodriguez, Benjamin [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
The physical and logical architecture of the interoperability platform defined for the distribution management systems (DMS) of the Distribution Subdivision of the Comision Federal de Electricidad (CFE) in Mexico is presented. The adopted architecture includes the definition of a technological platform to manage the exchange of information between systems and applications, based on the Common Information Model (CIM) established in standards IEC 61968 and IEC 61970. The architecture, based on SSOA (Semantic Services Oriented Architecture), EIB (Enterprise Integration Bus) and GID (Generic Interface Definition), is presented, as well as the sequence followed to achieve interoperability of the systems related to the management of electrical energy distribution in Mexico. Likewise, the process of establishing a semantic model of the Electrical Distribution System (SED) and creating CIM/XML instances is described, oriented to the interoperability of the information systems in the DMS scope by means of the exchange of messages conformant to, and validated against, the structure established by the CIM rules. In this way, the messages and the information exchanged among systems are assured compatibility and correct interpretation independently of the developer, brand or manufacturer of the source and destination systems. The primary objective is to establish the semantic base infrastructure for interoperability, founded on standards, that sustains the strategic definition of a Smart Electrical Distribution System (SEDI) in Mexico.
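A CIM/XML payload of the kind exchanged over such a platform can be sketched with RDF-style serialization, loosely following the conventions used for CIM exchange. The class and property names below are simplified examples for illustration, not a verified excerpt of the IEC CIM schema.

```python
import xml.etree.ElementTree as ET

# Namespaces in the style of CIM/XML (RDF-based) exchange documents.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CIM = "http://iec.ch/TC57/CIM#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("cim", CIM)

# Build a minimal message describing one piece of distribution equipment.
root = ET.Element(f"{{{RDF}}}RDF")
breaker = ET.SubElement(root, f"{{{CIM}}}Breaker",
                        attrib={f"{{{RDF}}}ID": "BRK-42"})
name = ET.SubElement(breaker, f"{{{CIM}}}IdentifiedObject.name")
name.text = "Feeder 42 breaker"

message = ET.tostring(root, encoding="unicode")
```

Because both sender and receiver agree on the CIM class and property names, the receiving system can interpret the message without knowing anything about the vendor of the sending system, which is the interoperability property the abstract describes.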
In the Web environment, the main tools of knowledge organization are the various kinds of knowledge organization systems (KOS), and the interoperability of KOS has become a hot problem in both knowledge organization research and applications. Based on an analysis of knowledge organization tools, this paper mainly discusses methods of semantic interoperability between traditional knowledge organization tools and modern knowledge organization tools represented by ontologies, in order to provide a reference for libraries and other information agencies in digital resource integration, resource sharing and knowledge services.
Frank Doheny; Paul Jacob; Maria Maleshkova; Owen Molloy; Robert Stewart; Sean Kennedy
Semantic Web Services (SWS) are Web Service (WS) descriptions augmented with semantic information. SWS enable intelligent reasoning and automation in areas such as service discovery, composition, mediation, ranking and invocation. This paper applies SWS to a previous protocol adapter which, operating within clearly defined constraints, maps SOAP Web Services to RESTful HTTP format. However, in the previous adapter, the configuration element is manual and the latency implications are locally based.
Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.
Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exist not only in different implementation domains, but also at different level of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstractions to define a system, and leverages multiple CAD tools to do simulation, verification and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoC to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that the external tools can be easily wrapped. Then the interoperability of the tools becomes the interoperability of the semantics. Ptolemy II can act as the standard interface among different tools to achieve the overall design modeling. A micro-accelerometer with digital feedback is studied as an example.
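The wrapping idea can be sketched as adapting each external tool to a common simulation interface so that the framework can coordinate tools whose models of computation differ. The method names below are illustrative stand-ins, not the actual Ptolemy II actor API.

```python
# Common interface every wrapped tool must expose: the coordinating
# framework only speaks this protocol, never a tool-specific one.
class ToolWrapper:
    def initialize(self):
        raise NotImplementedError
    def fire(self, inputs):
        raise NotImplementedError

# A wrapper for a tool whose model of computation is discrete events:
# inputs are (timestamp, payload) pairs processed in arrival order.
class DiscreteEventTool(ToolWrapper):
    def initialize(self):
        self.queue = []
    def fire(self, inputs):
        self.queue.extend(inputs)
        return self.queue.pop(0) if self.queue else None

de = DiscreteEventTool()
de.initialize()
first = de.fire([(0.5, "sample")])
```

With each tool behind such an interface, interoperability of tools reduces to interoperability of the semantics of the interface, which is the paper's central claim.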
This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
Simona Angela Apostol
Understanding the importance of the electronic medical health records system, with its various structural types and grades, has led to the elaboration of a series of standards and quality control methods meant to control its functioning. Over time, the electronic health records system has evolved along with changes in the structure of medical data. Romania has not yet managed to fully clarify this concept; various definitions are still encountered, such as "Patient's electronic chart" and "Electronic health file". A slow change from functional interoperability (OSI level 6) to semantic interoperability (level 7) is currently being aimed at. This article presents the main electronic file models from the perspective of the possibility of creating a functional interoperability system.
Stewart, John [Tennessee Valley Authority, Knoxville, TN (United States); Halbgewachs, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chavez, Adrian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Rhett [Schweitzer Engineering Laboratories, Chattanooga, TN (United States); Teumim, David [Teumim Technical, Allentown, PA (United States)
The manner in which control systems are designed and operated in the energy sector is undergoing some of the most significant changes in history due to the evolution of technology and the increasing number of interconnections to other systems. With these changes, however, come two significant challenges that the energy sector must face: 1) cyber security is more important than ever before, and 2) cyber security is more complicated than ever before. A key requirement in helping utilities and vendors alike to meet these challenges is interoperability. While interoperability has been present in much of the discussion of technology utilized within the energy sector, and especially the Smart Grid, it has been absent in the context of cyber security. The Lemnos project addresses these challenges by focusing on the interoperability of devices utilized within utility control systems which support critical cyber security functions. In theory, interoperability is possible with many of the cyber security solutions available to utilities today. The reality is that the effort required to achieve cyber security interoperability is often a barrier for utilities. For example, consider IPSec, a widely used Internet protocol suite for defining Virtual Private Networks, or "tunnels", to communicate securely through untrusted public and private networks. The IPSec protocol suite has a significant number of configuration options and encryption parameters to choose from, which must be agreed upon and adopted by both parties establishing the tunnel. The exercise of getting software or devices from different vendors to interoperate is labor-intensive and requires a significant amount of security expertise by the end user. Scale this effort to a significant number of devices operating over a large geographical area, and the challenge becomes so overwhelming that it often leads utilities to pursue solutions from a single vendor. These single-vendor solutions may inadvertently lock
In the last decade, ontologies have played a key technology role for information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and, hence, achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, mapping ontologies is a solution that cannot be avoided. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most of the existing semi-automatic algorithms of ontology mapping, such as Chimaera, Prompt, Onion, Glue, etc. To enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyzes the concepts' names and the latter analyzes their properties. Each of these two sub-modules is itself based on the combination of lexical and semantic similarity measures.
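The two-sub-module combination can be sketched as one score computed from concept names and one from their property sets, merged with fixed weights. The weights and the use of `difflib` are illustrative assumptions; the paper's algorithm also consults WordNet, which is omitted here.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Lexical similarity of two concept names (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def property_similarity(props_a, props_b):
    """Jaccard overlap of two concepts' property sets."""
    a, b = set(props_a), set(props_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def concept_similarity(name_a, props_a, name_b, props_b, w=0.5):
    """Weighted combination of the name and property sub-modules."""
    return w * name_similarity(name_a, name_b) + \
           (1 - w) * property_similarity(props_a, props_b)

score = concept_similarity("Author", {"name", "email"},
                           "author", {"name", "email", "orcid"})
```

A mapping-discovery stage would compute this score for all concept pairs across the two ontologies and keep pairs above a threshold as candidate mappings.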
Jozwik, Sara L.; Douglas, Karen H.
This study examined how explicit instruction in semantic ambiguity detection affected the reading comprehension and metalinguistic awareness of five English learners (ELs) with learning difficulties (e.g., attention deficit/hyperactivity disorder, specific learning disability). A multiple probe across participants design (Gast & Ledford, 2010)…
Schaeben, Helmut; Gabriel, Paul; Gietzel, Jan; Le, Hai Ha
GST - Geosciences in space and time is being developed and implemented as a hub to facilitate the exchange of spatially and temporally indexed multi-dimensional geoscience data and corresponding geomodels amongst partners. It originates from TUBAF's contribution to the EU project "ProMine", and its prospective extensions are TUBAF's contribution to the current EU project "GeoMol". As of today, it provides basic components of a geodata infrastructure as required to establish interoperability with respect to geosciences. Generally, interoperability means the facilitation of cross-border and cross-sector information exchange, taking into account legal, organisational, semantic and technical aspects, cf. Interoperability Solutions for European Public Administrations (ISA), cf. http://ec.europa.eu/isa/. Practical interoperability for partners of a joint geoscience project, say European Geological Surveys acting in a border region, means in particular the provision of IT technology to exchange spatially, and perhaps also temporally, indexed multi-dimensional geoscience data and corresponding models, i.e. the objects composing geomodels capturing the geometry, topology, and various geoscience contents. Geodata Infrastructure (GDI) and interoperability are objectives of several initiatives, e.g. INSPIRE, OneGeology-Europe, and most recently EGDI-SCOPE, to name just the most prominent ones. Then there are quite a few markup languages (ML) related to geographical or geological information, like GeoSciML, EarthResourceML, BoreholeML, and ResqML for reservoir characterization, earth and reservoir models, and many others featuring geoscience information. Several Web Services are focused on geographical or geoscience information. The Open Geospatial Consortium (OGC) promotes specifications of a Web Feature Service (WFS), a Web Map Service (WMS), a Web Coverage Service (WCS), a Web 3D Service (W3DS), and many more. It will be clarified how GST is related to these initiatives, especially
Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation.
Chatzitoulousis, Antonios; Efraimidis, Pavlos S.; Athanasiadis, I.N.
The Atlas Metadata System (AMS) employs semantic web annotation techniques in order to create an interoperable information annotation and retrieval platform for the tourism sector. AMS adopts state-of-the-art metadata vocabularies, annotation techniques and semantic web technologies. Interoperabilit
In recent years, Cloud Computing has been one of the top ten new technologies, providing various services such as software, platform and infrastructure to Internet users. Cloud Computing is a promising IT paradigm which enables the Internet's evolution into a global market of collaborating services. In order to provide better services for cloud customers, cloud providers need services that cooperate with other services. Therefore, Cloud Computing semantic interoperability plays a key role in Cloud Computing services. In this paper, we address interoperability issues in Cloud Computing environments. After a description of Cloud Computing interoperability from different aspects and references, we describe two architectures of cloud service interoperability. Architecturally, we classify existing interoperability challenges and describe them. Moreover, we use these aspects to discuss and compare several interoperability approaches.
An information representation framework is designed in this paper to overcome the problem of semantic heterogeneity in distributed environments. Emphasis is placed on establishing an XML-oriented semantic data model and the mapping between XML data based on a global ontology semantic view. The framework is implemented as a Web Service, which enhances information processing efficiency and accuracy as well as semantic interoperability.
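The mapping-via-global-ontology idea can be sketched as a two-hop translation: each party maps its local XML tags to a shared ontology concept, and translation goes source tag to concept to target tag. The vocabularies below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Each schema maps its local tags to concepts in a shared global ontology.
SOURCE_TO_CONCEPT = {"custName": "ont:CustomerName", "zip": "ont:PostalCode"}
CONCEPT_TO_TARGET = {"ont:CustomerName": "ClientName", "ont:PostalCode": "PostCode"}

def translate(xml_text):
    """Rewrite source-schema tags into target-schema tags via the ontology."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        concept = SOURCE_TO_CONCEPT.get(elem.tag)
        if concept in CONCEPT_TO_TARGET:
            elem.tag = CONCEPT_TO_TARGET[concept]
    return ET.tostring(root, encoding="unicode")

out = translate("<order><custName>Ada</custName><zip>90210</zip></order>")
```

The benefit of routing through the ontology rather than mapping schemas pairwise is that adding an n-th schema requires one new mapping to the ontology instead of n-1 new pairwise mappings.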
Bosca, Diego; Moner, David; Maldonado, Jose Alberto; Robles, Montserrat
Messaging standards, and specifically HL7 v2, are heavily used for the communication and interoperability of Health Information Systems. HL7 FHIR was created as an evolution of the messaging standards to achieve semantic interoperability. FHIR is somewhat similar to other approaches, such as the dual model methodology, as both are based on the precise modeling of clinical information. In this paper, we demonstrate how the dual model methodology can be applied to standards like FHIR. We show the usefulness of this approach for data transformation between FHIR and other specifications such as HL7 CDA, EN ISO 13606, and openEHR. We also discuss the advantages and disadvantages of defining archetypes over FHIR, and the consequences and outcomes of this approach. Finally, we exemplify this approach by creating a testing data server that supports both FHIR resources and archetypes.
The cooperation of different processes may be lost by mistake when a protocol is executed, and the protocol cannot operate normally under this condition. In this paper, the self fault-tolerance of protocols is discussed, and a semantics-based approach for achieving self fault-tolerance of protocols is presented. Some main characteristics of self fault-tolerance of protocols concerning liveness, non-termination and infinity are also presented. Meanwhile, the sufficient and necessary conditions for achieving self fault-tolerance of protocols are given. Finally, a typical protocol that does not satisfy self fault-tolerance is investigated, and a redesigned version of this existing protocol using the proposed approach is given.
Wicaksana, I Wayan Simri
Unlike the traditional model of information pull, matchmaking is based on a cooperative partnership between information providers and consumers, assisted by an intelligent facilitator (the matchmaker). According to some experiments, matchmaking is most useful in two different ways: locating information sources or services that appear dynamically, and notification of information changes. Effective information and service sharing in distributed, e.g. P2P-based, environments raises many challenges, including discovery and localization of resources, exchange over heterogeneous sources, and query processing. One traditional approach for dealing with some of the above challenges is to create unified integrated schemas or services to combine the heterogeneous sources. This approach does not scale well when applied in dynamic distributed environments and has many drawbacks related to the large numbers of sources. The main issues in matchmaking are how to represent advertising and requests, and how to calculate poss...
Mobile devices offer integrated functionality to browse, phone, play music, and watch video. Moreover, these devices have sufficient memory and processing power to run (small) applications based on for instance Google Android and the iPhone/iPod OS. As such, they support for instance Google Earth to
Semantic web technologies have the potential to simplify heterogeneous data integration using explicit semantics. This paper proposes a framework for building an intelligent interoperable application for an employment exchange system by collaboration among distributed heterogeneous data models using semantic web technologies. The objective of application development using semantic technologies is to provide better inference for queries against a dynamic collection of information in the collaborating data models. The employment exchange system provides an interface for users to register their details, thereby managing the knowledge base dynamically. The semantic server transforms queries from the employer and jobseeker semantically for possible integration of the two heterogeneous data models to drive intelligent inference. The semantic agent reconciles the syntactic and semantic conflicts existing among the contributing ontologies at different granularity levels, performs automatic integration of the two source ontologies, and gives a better response to the user. The benefits of building an interoperable application using the semantic web are data sharing, knowledge reuse, better query responses, independent maintenance of the models, and extensibility of the application with extra features.
High-quality and comfortable online delivery of governmental services often requires the seamless exchange of data between two or more government agencies. Smooth data exchange, in turn, requires interoperability of the databases and workflows in the agencies involved. Interoperability (IOP) is a complex issue covering purely technical aspects such as transmission protocols and data exchange formats, but also content-related semantic aspects such as identifiers and the meaning of codes, as well as organizational, contractual or legal issues. Starting from IOP frameworks which provide classifications of what has to be standardized, this paper, based on an ongoing research project, adopts a political and managerial view and tries to clarify the governance of achieving IOP, i.e. where and by whom IOP standards are developed and established and how they are put into operation. By analyzing 32 cases of successful implementation of IOP in E-Government services within the European Union, empirical indicators for different aspects of governance are proposed and applied to develop an empirical taxonomy of different types of IOP governance, which can be used for future comparative research regarding success factors, barriers, etc.
Sensors play an increasingly critical role in capturing and distributing observations of phenomena in our environment. The Semantic Sensor Web enables interoperability to support various applications that use data made available by semantically heterogeneous sensor services. However, several challenges still need to be addressed to achieve this vision. More particularly, mechanisms that can support context-aware semantic mapping that adapts to dynamic sensor metadata are required. Semantic mapping for the Sensor Web is required to support sensor data fusion, sensor data discovery and retrieval, and automatic semantic annotation, to name only a few applications. This paper presents a context-aware ontology-based semantic mediation service for heterogeneous sensor services. The semantic mediation service is context-aware and dynamic because it takes into account the real-time variability of the thematic, spatial and temporal features that describe sensor data in different contexts. The semantic mediation service integrates rule-based reasoning to support the resolution of semantic heterogeneities. An application scenario is presented showing how the semantic mediation service can improve sensor data interpretation, reuse, and sharing in static and dynamic settings.
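The rule-based mediation idea can be sketched as a pipeline of rules, each of which inspects an observation's metadata (its context) and rewrites the observation toward a common representation. The field names and the single unit-conversion rule below are illustrative assumptions.

```python
# One mediation rule: harmonize temperature readings reported in Fahrenheit.
def fahrenheit_rule(obs):
    if obs.get("unit") == "degF":
        obs = dict(obs, value=round((obs["value"] - 32) * 5 / 9, 2), unit="degC")
    return obs

# A real mediator would hold many such rules, selected by context.
RULES = [fahrenheit_rule]

def mediate(obs):
    """Apply every rule in order, producing a harmonized observation."""
    for rule in RULES:
        obs = rule(obs)
    return obs

harmonized = mediate({"sensor": "s1", "value": 212.0, "unit": "degF"})
```

Keeping the rules declarative and separate from the pipeline is what lets such a service adapt when sensor metadata changes: new heterogeneities are handled by adding rules, not by rewriting consumers.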
Sheth, A.; Henson, C.; Thirunarayan, K.
Sensors are distributed across the globe, leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the Semantic Sensor Web (SSW) proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. The Kno.e.sis Center's approach to SSW is an evolutionary one. It adds semantic annotations to the existing standard sensor languages of the Sensor Web Enablement (SWE) defined by OGC. These annotations enhance the primarily syntactic XML-based descriptions in OGC's SWE languages with microformats and the W3C's Semantic Web languages, RDF and OWL. In association with semantic annotation and Semantic Web capabilities including ontologies and rules, SSW supports interoperability, analysis and reasoning over heterogeneous multi-modal sensor data. In this presentation, we will also demonstrate a mashup with support for complex spatio-temporal-thematic queries and semantic analysis that utilizes semantic annotations, multiple ontologies and rules. It uses existing services (e.g., GoogleMap) and a semantics-enhanced SWE Sensor Observation Service (SOS) over weather and road condition data from various sensors that are part of Ohio's transportation network. Our upcoming plans are to demonstrate end-to-end (heterogeneous sensor to application) semantics support and to study the scalability of SSW involving thousands of sensors and about a billion triples. Keywords: Semantic Sensor Web, Spatiotemporal thematic queries, Semantic Web Enablement, Sensor Observation Service
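Semantic annotation of a sensor reading can be sketched as attaching ontology terms to a raw value as subject-predicate-object triples. The prefixes and terms below are abbreviated placeholders in the spirit of the W3C SSN vocabulary, not verified IRIs from it.

```python
# Annotate one raw sensor reading with semantic metadata as RDF-style triples.
def annotate(sensor_id, value, observed_property, feature_of_interest):
    obs = f"ex:obs/{sensor_id}/1"
    return [
        (obs, "rdf:type", "ssn:Observation"),
        (obs, "ssn:observedBy", f"ex:sensor/{sensor_id}"),
        (obs, "ssn:observedProperty", observed_property),
        (obs, "ssn:featureOfInterest", feature_of_interest),
        (obs, "ssn:hasValue", str(value)),
    ]

triples = annotate("ws-12", 4.2, "ex:WindSpeed", "ex:I70-MileMarker-5")
```

Once readings from heterogeneous networks are expressed as triples against shared ontologies, a single query or rule engine can reason across all of them, which is the interoperability gain the abstract describes.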
Interoperability of tools usually refers to a combination of methods and techniques that address the problem of making a collection of tools work together. In this study we survey different notions that are used in this context: interoperability, interaction and integration. We point out relation
Coalition-wide interoperability can be improved considerably by better harmonisation of all major information standardisation efforts within NATO. This notion is supported by the concept of dividing the NATO C3 information area into more or less independent “information interoperability domains”, co
Fung, N.L.S.; Jones, V.M.; Hermens, H.J.
Objectives: The main objective is to develop and validate a reference information model (RIM) to support semantic interoperability of pervasive telemedicine systems. The RIM is one component within a larger, computer-interpretable "MADE language" developed by the authors in the context of the MobiGu
Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...
Domain-specific knowledge representation is achieved through the use of ontologies. The ontology model of software risk management is an effective approach for communication between people in the teaching and learning community, for communication and interoperation among various knowledge-oriented applications, and for the sharing and reuse of software. But the lack of formal representation tools for domain modeling results in taking liberties with conceptualization. This paper describes an ontology-based semantic knowledge representation mechanism, and the architecture we propose has been successfully implemented for the software risk management domain.
Brodaric, Boyan; Booth, Nathaniel; Boisvert, Eric; Lucido, Jessica M.
Water data networks are increasingly being integrated to answer complex scientific questions that often span large geographical areas and cross political borders. Data heterogeneity is a major obstacle that impedes interoperability within and between such networks. It is resolved here for groundwater data at five levels of interoperability, within a Spatial Data Infrastructure architecture. The result is a pair of distinct national groundwater data networks for the United States and Canada, and a combined data network in which they are interoperable. This combined data network enables, for the first time, transparent public access to harmonized groundwater data from both sides of the shared international border.
Background: Recent advances in Web and information technologies, together with the increasing decentralization of organizational structures, have resulted in massive amounts of information resources and domain-specific services in Traditional Chinese Medicine (TCM). The massive volume and diversity of the information and services available have made it difficult to achieve seamless and interoperable e-Science for knowledge-intensive disciplines like TCM. Therefore, information integration and service coordination are two major challenges in e-Science for TCM. We still lack sophisticated approaches to integrate scientific data and services for TCM e-Science. Results: We present a comprehensive approach to building dynamic and extendable e-Science applications for knowledge-intensive disciplines like TCM based on semantic and knowledge-based techniques. The semantic e-Science infrastructure for TCM supports large-scale database integration and service coordination in a virtual organization. We use domain ontologies to integrate TCM database resources and services in a semantic cyberspace and deliver a semantically superior experience, including browsing, searching, querying and knowledge discovery, to users. We have developed a collection of semantic-based toolkits to facilitate information sharing and collaborative research among TCM scientists and researchers. Conclusion: Semantic and knowledge-based techniques are suitable for knowledge-intensive disciplines like TCM. It is possible to build an on-demand e-Science system for TCM based on existing semantic and knowledge-based techniques. The approach presented in the paper integrates heterogeneous distributed TCM databases and services, and provides scientists with a semantically superior experience to support collaborative research in the TCM discipline.
Although the integration of sensor-based information into analysis and decision making has been a research topic for many years, semantic interoperability has not yet been reached. The advent of user-generated content for the geospatial domain, Volunteered Geographic Information (VGI), makes it even more difficult to establish semantic integration. This paper proposes a novel approach to integrating conventional sensor information and VGI, which is exploited in the context of detecting forest fires. In contrast to common logic-based semantic descriptions, we present a formal system using algebraic specifications to unambiguously describe the processing steps from natural phenomena to value-added information. A generic ontology of observations is extended and profiled for forest fire detection in order to illustrate how the sensing process, and transformations between heterogeneous sensing systems, can be represented as mathematical functions and grouped into abstract data types. We discuss the required ontological commitments and a possible generalization.
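The core idea of the abstract above, modelling the sensing process as mathematical functions grouped into abstract data types, can be sketched in a few lines. This is an illustrative reading only, not the paper's actual specification; the station names, thresholds, and detection rule are invented.

```python
from dataclasses import dataclass

# Hypothetical sketch: each processing step from raw phenomenon to
# value-added information is an explicit, typed function; related
# operations are grouped into an abstract data type (Reading).

@dataclass(frozen=True)
class Reading:
    station: str
    temperature_c: float   # conventional sensor value
    smoke_density: float   # 0.0-1.0, e.g. derived from a VGI report

# A "transformation between heterogeneous sensing systems" is simply a
# function from one representation into another.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32.0) * 5.0 / 9.0

def fire_risk(r: Reading) -> bool:
    """Toy value-added observation: hot and smoky suggests a forest fire."""
    return r.temperature_c > 40.0 and r.smoke_density > 0.5

raw = Reading("station-7", fahrenheit_to_celsius(113.0), 0.8)
print(raw.temperature_c)  # 45.0
print(fire_risk(raw))     # True
```

Because every step is an explicit function, the composition from phenomenon to alert can be specified and checked unambiguously, which is the advantage the authors claim over purely logic-based descriptions.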
Background: Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards, as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results: The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML class diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has
Hardin, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Stephan, Eric G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Corbin, Charles D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Widergren, Steven E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Through its Building Technologies Office (BTO), the United States Department of Energy’s Office of Energy Efficiency and Renewable Energy (DOE-EERE) is sponsoring an effort to advance interoperability for the integration of intelligent buildings equipment and automation systems, understanding the importance of integration frameworks and product ecosystems to this cause. This is important to BTO’s mission to enhance energy efficiency and save energy for economic and environmental purposes. For connected-buildings ecosystems of products and services from various manufacturers to flourish, the ICT aspects of the equipment need to integrate and operate simply and reliably. Within the concepts of interoperability lie the specification, development, and certification of equipment with standards-based interfaces that connect and work. Beyond this, a healthy community of stakeholders that contribute to and use interoperability work products must be developed. On May 1, 2014, the DOE convened a technical meeting to take stock of the current state of interoperability of connected equipment and systems in buildings. Several insights from that meeting helped facilitate a draft description of the landscape of interoperability for connected buildings, which focuses mainly on small and medium commercial buildings. This document revises the February 2015 landscape document to address reviewer comments, incorporate important insights from the Buildings Interoperability Vision technical meeting, and capture thoughts from that meeting about the topics to be addressed in a buildings interoperability vision. In particular, greater attention is paid to the state of information modeling in buildings and the great potential for near-term benefits in this area from progress and community alignment.
The Semantic Web is a technology at the service of knowledge which is aimed at accessibility and the sharing of content, facilitating interoperability between different systems, and as such is one of the nine key technological pillars of ICT (technologies for information and communication) within the third theme, the Specific Cooperation Programme, of the Seventh Framework Programme for research and development (FP7, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).
Bare, James Christopher; Baliga, Nitin S
Understanding biological complexity demands a combination of high-throughput data and interdisciplinary skills. One way to bring to bear the necessary combination of data types and expertise is by encapsulating domain knowledge in software and composing that software to create a customized data analysis environment. To this end, simple flexible strategies are needed for interconnecting heterogeneous software tools and enabling data exchange between them. Drawing on our own work and that of others, we present several strategies for interoperability and their consequences, in particular, a set of simple data structures (list, matrix, network, table and tuple) that have proven sufficient to achieve a high degree of interoperability. We provide a few guidelines for the development of future software that will function as part of an interoperable community of software tools for biological data analysis and visualization.
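The point of the abstract above, that a handful of simple shared structures suffice for tool interoperability, can be made concrete with a toy sketch. The two "tools" and the data below are invented for illustration; the only thing they share is the plain table structure named in the abstract.

```python
# Two hypothetical tools that know nothing of each other's internals
# exchange data as a plain table: a header tuple plus rows of tuples.

def expression_tool():
    """Pretend analysis tool: emits results as a (header, rows) table."""
    header = ("gene", "log2fc")
    rows = [("dnaK", 2.1), ("groEL", -0.7), ("rpoB", 0.3)]
    return header, rows

def filter_tool(table, column, threshold):
    """Pretend downstream tool: consumes any table containing `column`."""
    header, rows = table
    idx = header.index(column)
    return header, [r for r in rows if r[idx] > threshold]

header, hits = filter_tool(expression_tool(), "log2fc", 1.0)
print(hits)  # [('dnaK', 2.1)]
```

Because neither tool depends on the other's types, either can be swapped out as long as the table contract is honored, which is the interoperability property the authors argue for.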
@@ I am sure that there will be much discussion at the upcoming Baltic IT&T 2005 conference about standards and interoperability, and so I thought I would try to contribute to the debate with this, the first of four articles that I will write for this journal over the coming months.
Masahiko Nagai; Masafumi Ono; Ryosuke Shibasaki
The ontology registry system is developed to collect, manage, and compare ontological information for integrating global observation data. Data sharing and data services, such as support for metadata design, structuring of data contents, and support for text mining, are applied for better use of data as data interoperability. A semantic network dictionary and gazetteers are constructed as a trans-disciplinary dictionary. Ontological information is added to the system by digitizing text-based dictionaries, developing a "knowledge writing tool" for experts, and extracting semantic relations from authoritative documents with natural language processing techniques. The system is developed to collect lexicographic ontology and geographic ontology.
Nagai, M.; Ono, M.; Shibasaki, R.
Standardization organizations are working on the syntactic and schematic levels of interoperability. At the same time, semantic interoperability must also be addressed, since earth observation data are heterogeneous, highly diversified, and large in volume. The ontology registry has been developed, and ontological information such as technical vocabularies for earth observation has been collected, for data interoperability arrangement. This is a very challenging approach to earth observation data interoperability, because collaboration and cooperation with scientists of different disciplines are essential for common understanding. Multiple Semantic MediaWikis are applied to register and update technical vocabularies as part of the ontology registry, which promises to be a useful tool for users. In order to invite contributions from the user community, it is necessary to provide sophisticated and easy-to-use tools and systems, such as a table-like editor, a reverse dictionary, and graph representation, for sustainable development and usage of ontological information. Registered ontologies supply the reference information required for earth observation data retrieval. We propose data/metadata search with ontologies such as technical vocabularies, and visualization of relations among datasets, for very large-scale and varied earth observation data.
Software frameworks and architectures are in need of metadata to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing metadata, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to metadata representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as annotations or attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to their originating code. Since models and
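The annotation idea described above can be illustrated with a small sketch. OMS itself uses Java annotations; the Python decorator below is only an analogue of the mechanism, and the decorator name, metadata keys, and snowmelt component are all invented, not the OMS API.

```python
# Illustrative analogue of OMS-style component annotations: metadata is
# attached declaratively to the component code itself, so a framework can
# discover roles, units, and dependencies without explicit API calls.

def annotate(**meta):
    """Hypothetical decorator: stores metadata on the decorated object."""
    def wrap(obj):
        obj.__component_meta__ = {**getattr(obj, "__component_meta__", {}), **meta}
        return obj
    return wrap

@annotate(role="process", description="Simple degree-day snowmelt")
class SnowMelt:
    @staticmethod
    @annotate(unit="mm/day")
    def melt(temp_c: float, degree_day_factor: float = 2.0) -> float:
        # Toy physical process: melt is proportional to positive temperature.
        return max(0.0, temp_c) * degree_day_factor

# A framework could now assemble, document, or test the component purely
# from its attached metadata, without invasive framework dependencies:
print(SnowMelt.__component_meta__["role"])       # process
print(SnowMelt.melt.__component_meta__["unit"])  # mm/day
print(SnowMelt.melt(3.0))                        # 6.0
```

The component itself has no import of any framework code, which mirrors the "minimal dependencies on the modeling framework" property the abstract emphasizes.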
Differing terminology and database structure hinder meaningful cross-search of excavation datasets. Matching free-text grey literature reports with datasets poses yet more challenges. Conventional search techniques are unable to cross-search between archaeological datasets and Web-based grey literature. Results are reported from two AHRC-funded research projects that investigated the use of semantic techniques to link digital archive databases, vocabularies and associated grey literature. STAR (Semantic Technologies for Archaeological Resources) was a collaboration between the University of Glamorgan's Hypermedia Research Unit and English Heritage (EH). The main outcome is a research demonstrator (available online), which cross-searches over excavation datasets from different database schemas, including Raunds Roman, Raunds Prehistoric, Museum of London, Silchester Roman and Stanwick sampling. The system additionally cross-searches over an extract of excavation reports from the OASIS index of grey literature, operated by the Archaeology Data Service (ADS). A conceptual framework provided by the CIDOC Conceptual Reference Model (CRM) integrates the different database structures and the metadata automatically generated from the OASIS reports by natural language processing techniques. The methods employed for extracting semantic RDF representations from the datasets and the information extraction from grey literature are described. The STELLAR project provides freely available tools to reduce the costs of mapping and extracting data to semantic search systems such as the demonstrator and to linked data representation generally. Detailed use scenarios (and a screen-capture video) provide a basis for a discussion of key issues, including cost-benefits, ontology modelling, mapping, terminology control, semantic implementation and information extraction issues. The scenarios show that semantic interoperability can be achieved by mapping and extracting
of systems 2d) Gateways for managing interoperability 3) The description of operations common to a coalition 4... of the latter, as soon as one seeks to make them cooperate, poses a difficult problem arising from their heterogeneity. The gateway solution
Guédria, Wided; Naudet, Yannick; Chen, David
Historically, progress occurs when entities communicate, share information and together create something that no one individually could do alone. Moving beyond people to machines and systems, interoperability is becoming a key factor of success in all domains. In particular, interoperability has become a challenge for enterprises, to exploit market opportunities, to meet their own objectives of cooperation or simply to survive in a growing competitive world where the networked enterprise is becoming a standard. Within this context, many research works have been conducted over the past few years and enterprise interoperability has become an important area of research, ensuring the competitiveness and growth of European enterprises. Among others, enterprises have to control their interoperability strategy and enhance their ability to interoperate. This is the purpose of the interoperability assessment. Assessing interoperability maturity allows a company to know its strengths and weaknesses in terms of interoperability with its current and potential partners, and to prioritise actions for improvement. The objective of this paper is to define a maturity model for enterprise interoperability that takes into account existing maturity models while extending the coverage of the interoperability domain. The assessment methodology is also presented. Both are demonstrated with a real case study.
In companies, the historically developed IT systems are mostly application islands. They always produce good results if the system's requirements and surroundings are not changed and as long as a system interface is not needed. With the ever-increasing dynamism and globalization of the market, however, these IT islands are certain to collapse. Interoperability (IO) is the need of the hour, presupposing the integration of users, data, applications and processes. In the following, important IO enablers such as ETL, EAI, and SOA will be examined on the basis of practicability. It will be shown that SOA in particular produces a surge of interoperability that could rightly be referred to as an IT evolution.
Halbgewachs, Ronald D.
With the Lemnos framework, interoperability of control security equipment is straightforward. To obtain interoperability between proprietary security appliance units, one or both vendors must now write cumbersome 'translation code.' If one party changes something, the translation code 'breaks.' The Lemnos project is developing and testing a framework that uses widely available security functions and protocols like IPsec - to form a secure communications channel - and Syslog, to exchange security log messages. Using this model, security appliances from two or more different vendors can clearly and securely exchange information, helping to better protect the total system. Simplify regulatory compliance in a complicated security environment by leveraging the Lemnos framework. As an electric utility, are you struggling to implement the NERC CIP standards and other regulations? Are you weighing the misery of multiple management interfaces against committing to a ubiquitous single-vendor solution? When vendors build their security appliances to interoperate using the Lemnos framework, it becomes practical to match best-of-breed offerings from an assortment of vendors to your specific control systems needs. The Lemnos project is developing and testing a framework that uses widely available open-source security functions and protocols like IPsec and Syslog to create a secure communications channel between appliances in order to exchange security data.
Zaschke, C.; Essendorfer, B.; Kerth, C.
To achieve knowledge superiority in today's operations, interoperability is the key. Budget restrictions, as well as the complexity and multiplicity of threats combined with the fact that not single nations but whole areas are subject to attacks, force nations to collaborate and share information as appropriate. Multiple data and information sources produce different kinds of data, real-time and non-real-time, in different formats that are disseminated to the respective command and control level for further distribution. The data is usually highly sensitive and restricted in terms of sharing. The question is how to make this data available to the right people at the right time with the right granularity. The Coalition Shared Data concept aims to provide a solution to these questions. It has been developed within several multinational projects and has evolved over time. A continuous improvement process was established and resulted in the adaptation of the architecture as well as the technical solution and the processes it supports. Starting from the idea of making use of existing standards, basing the concept on the sharing of data through standardized interfaces and formats, and enabling metadata-based queries, the concept merged with a more sophisticated service-based approach. The paper addresses concepts for information sharing to facilitate interoperability between heterogeneous distributed systems. It introduces the methods that were used and the challenges that had to be overcome. Furthermore, the paper gives a perspective on how the concept could be used in the future and what measures have to be taken to successfully bring it into operations.
Elements of each system and their interrelationships can be represented using models simulating reality. Analysis of the problem with the use of modeling methods facilitates the identification of the problem, its diagnosis and the assessment of data quality. In this study, the method of semantic modeling, based on the principles of system analysis, was used. The thematic scope of the publication covers the use of these methods in order to develop a model of cadastral data in Poland. The real estate cadastre is one of the most important information systems in the world, based on spatial data of a geodetic and legal nature. It is therefore important to appropriately define its purpose, the scope of data and the links between objects, subjects and their assigned rights. The methods of system analysis are widely used for this purpose around the world. By using these methods, the schematic diagram of the multi-purpose cadastre in Poland was developed, along with a model of the sources and types of cadastral information. Particular attention was paid to modeling the scope and content of spatial data. The basic assumption of the functionality of the cadastral system is the interoperability of data from selected databases collecting information about cadastral objects, with the use of the thematic groups of the INSPIRE Directive. The spatial conditions of data interoperability of the modeled multi-purpose cadastre, based on current legal conditions in Poland, are presented in this paper.
The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization based on the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.
Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise
The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state-of-the-art sensor technology. As the number of sensors deployed is continually increasing, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above-mentioned challenges. The Open Geospatial Consortium (OGC) has created the Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF) and the RDF Query Language (SPARQL). As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5-star Linked Data and in the OGC SWE (SensorML, Observations & Measurements) standards. Thus sensors become readily discoverable, accessible and usable via the web. Content- and context-based searching is also enabled, since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done at BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor
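The Linked Data approach sketched above amounts to publishing sensor descriptions as URI-keyed subject-predicate-object triples and querying them with graph patterns. The toy below shows the shape of that idea with a minimal in-memory triple set and a SPARQL-like pattern matcher; the URIs and vocabulary terms are illustrative, not BODC's actual identifiers.

```python
# Sensor descriptions as triples keyed by persistent URIs. In practice
# these would live in an RDF store and be queried with SPARQL; here a
# plain set of tuples and a wildcard matcher convey the mechanism.

triples = {
    ("http://example.org/sensor/42", "rdf:type", "sosa:Sensor"),
    ("http://example.org/sensor/42", "sosa:observes", "http://vocab.example.org/TEMP"),
    ("http://example.org/sensor/42", "rdfs:label", "CTD temperature probe"),
    ("http://example.org/sensor/99", "rdf:type", "sosa:Platform"),
}

def query(s=None, p=None, o=None):
    """Basic graph pattern over the triple set; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Which resources are sensors? (machine-discoverable via shared vocabulary)
sensors = query(p="rdf:type", o="sosa:Sensor")
print([t[0] for t in sensors])  # ['http://example.org/sensor/42']
```

Because both the identifiers and the predicates come from shared, resolvable vocabularies, any consumer that understands the vocabulary can discover and interpret the sensor description, which is the semantic interoperability claim of the abstract.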
Federal Laboratory Consortium — The UGV Interoperability Lab provides the capability to verify vendor conformance against government-defined interoperability profiles (IOPs). This capability allows...
SUN Hongjun; FAN Yushun
Semantic extraction is essential for semantic interoperability in multi-enterprise business collaboration environments. Although many studies on semantic extraction have been carried out, few have focused on how to precisely and effectively extract semantics from multiple heterogeneous data schemas. This paper presents a semi-automatic semantic extraction method based on a neutral representation format (NRF) for acquiring semantics from heterogeneous data schemas. As a unified syntax-independent model, NRF removes all the contingencies of heterogeneous data schemas from the original data environment. Conceptual extraction and keyword extraction are used to acquire the semantics from the NRF. Conceptual extraction entails constructing a conceptual model, while keyword extraction seeks to obtain the metadata. An industrial case is given to validate the approach. The method has good extensibility and flexibility. The results show that the method provides simple, accurate, and effective semantic interoperability in multi-enterprise business collaboration environments.
Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.
The GI brokering suite provides the discovery, access, and semantic brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) among a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardization organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since information technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for widely used specifications introduced to implement Linked Data, the Semantic Web, and precise community needs. Among others, these include: DCAT: an RDF vocabulary designed to facilitate interoperability between Web data catalogs. CKAN: a data management system for data distribution, particularly used by
Razavi, Mahsa; Aliee, Fereidoon Shams
Enterprise Architecture (EA), as a discipline with numerous enterprise-wide models, can support decision making on enterprise-wide issues. In order to provide such support, EA models should be amenable to analysis of various utilities and quality attributes. This paper provides a method for EA interoperability analysis. The approach is based on the Analytic Hierarchy Process (AHP) and considers the situation of the enterprise when giving weights to the different criteria and sub-criteria of each utility. It proposes a quantitative method for assessing the interoperability achievement of different scenarios using AHP, based on the knowledge and experience of EA experts and domain experts, and helps in deciding between them. The applicability of the proposed approach is demonstrated using a practical case study.
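The AHP machinery the abstract refers to can be sketched in a few lines. The criteria names, pairwise judgments, and scenario ratings below are illustrative assumptions, not values from the case study, and the priority vector is approximated with the row geometric mean rather than a full eigenvector computation:

```python
import math

def ahp_weights(pairwise):
    # Row geometric mean approximation of the AHP priority vector.
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative pairwise judgments on a 1-9 Saaty scale for three
# hypothetical interoperability criteria (names are assumptions).
criteria = ["semantic", "technical", "organizational"]
judgments = [
    [1,     3,   5],    # semantic vs. the others
    [1/3,   1,   2],
    [1/5, 1/2,   1],
]
weights = ahp_weights(judgments)

# Score two candidate scenarios by a weighted sum of per-criterion ratings.
scenarios = {"A": [0.8, 0.5, 0.6], "B": [0.4, 0.9, 0.7]}
scores = {name: sum(w * r for w, r in zip(weights, ratings))
          for name, ratings in scenarios.items()}
```

With these (invented) judgments the semantic criterion dominates, so scenario A, which rates highest on it, wins the comparison.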
Liang Hu; Jingyan Jiang; Jin Zhou; Kuo Zhao; Liang Chen; Huimin Lu
The Internet of Things has developed rapidly in recent years, and a growing number of devices connect to the Internet. Hence interoperability, i.e., the access and interpretation of unambiguous data, is strongly needed by distributed and heterogeneous devices. Semantics promotes interoperability in the Internet of Things by using ontologies to provide precise definitions of concepts and relations. In this paper, we demonstrate the importance of semantics in three aspects: firstly, the sem...
Suleman, Hussein; Fox, Edward
Explains the Open Archives Initiative (OAI), which was developed to solve problems of digital library interoperability on the World Wide Web. Topics include metadata; HTTP; XML; Dublin Core; the OAI Metadata Harvesting Protocol; data providers; service providers; reference linking; library policies; shared semantics; and name authority systems.
PAN Le-yun; LIU Xiao-qiang; MA Fan-yuan
On the Semantic Web, data interoperability and ontology heterogeneity are becoming ever more important issues. To resolve these problems, multiple classification methods can be used to learn the matching between ontologies. The paper uses a general statistical classification method to discover category features in data instances and the first-order learning algorithm FOIL to exploit the semantic relations among data instances. When using a multistrategy learning approach, a central problem is the evaluation of the multistrategy classifiers. The goal and the conditions of using multistrategy classifiers within ontology matching are different from those for general text classification. This paper describes a combination rule for multiple classifiers, called the Best Outstanding Champion, which is suitable for heterogeneous ontology mapping. Given the prediction results of the individual methods, the rule effectively accumulates the correct matchings of each standalone classifier. The experiments show that the approach achieves high accuracy on real-world domains.
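The Best Outstanding Champion rule is only named in this abstract, so the following is a hedged sketch of one plausible reading: among the individual classifiers, trust the one whose top candidate stands out most from its runner-up. The category labels and confidence values are invented for illustration:

```python
def combine(predictions):
    """predictions: one {candidate_label: confidence} dict per classifier.
    Returns the label of the classifier with the most outstanding winner,
    measured by the margin between its top candidate and the runner-up."""
    best_label, best_margin = None, -1.0
    for scores in predictions:
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        top_label, top_score = ranked[0]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
        margin = top_score - runner_up
        if margin > best_margin:
            best_label, best_margin = top_label, margin
    return best_label

# The statistical classifier is nearly undecided; the FOIL-based one is
# confident, so its "champion" wins the combined vote.
naive_bayes = {"Address": 0.41, "Location": 0.39}
foil_rules  = {"Location": 0.90, "Address": 0.10}
print(combine([naive_bayes, foil_rules]))  # → Location
```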
DISTRIBUTION STATEMENT A. Approved for public release; distribution unlimited. Supply chain connections are often hindered by inconsistent vocabularies, terms, ontologies, and semantics utilized by various supply chain members (Ye et al.).
One way of achieving interoperability among heterogeneous, distributed DBMSs is through a multidatabase system. Recently, there has been increasing use of CORBA implementations in developing multidatabase systems. Panorama is a multidatabase system that has been implemented on top of a CORBA-compliant ORB, namely VisiBroker. It aims to achieve interoperability among Oracle, Sybase and other DBMSs through the registration of these DBMSs with Panorama and through a single global query language, PanoSQL, designed for this system. In this paper, we first introduce CORBA for interoperability in multidatabase systems. Then, a general view of our multidatabase system, Panorama, is given. In section four, we introduce the global query language, PanoSQL, designed to achieve interoperability among the different DBMSs implemented in Panorama. Then, as an example, we present the registration of Oracle with Panorama in order to achieve interoperability in this system. Finally, conclusions and future work for this system are given.
Juzwishin, Donald W M
Achieving effective health informatics interoperability in a fragmented and uncoordinated health system is by definition not possible. Interoperability requires the simultaneous integration of health care processes and information across different types and levels of care (systems thinking). The fundamental argument of this paper is that information system interoperability will remain an unfulfilled hope until health reforms effectively address the governance (accountability), structural and process barriers to interoperability of health care delivery. The ascendency of Web 2.0 and 3.0, although still unproven, signals the opportunity to accelerate patients' access to health information and their health record. Policy suggestions for simultaneously advancing health system delivery and information system interoperability are posited.
Reeta Sony A.L
Full Text Available Cloud computing is a paradigm shift in the field of computing. It is moving at an incredibly fast pace and is one of the fastest evolving domains of computer science today. It consists of a set of technologies and service models that concentrate on the Internet-based use and delivery of IT applications, processing capability, storage and memory space. There is a shift from traditional in-house servers and applications to the next generation of cloud computing applications. With many of the computing giants like Google, Microsoft, etc. entering the cloud computing arena, there will be thousands of applications running on the cloud. There are several cloud environments available in the market today which support a huge consumer base. Eventually this will lead to a multitude of standards, technologies and products being provided on the cloud. Consumers will need a certain degree of flexibility to use the cloud applications/services of their choice, and at the same time will need these applications/services to communicate with each other. This paper emphasizes cloud computing and provides a solution for achieving interoperability in the form of Web Services. The paper also provides a live case study where interoperability comes into play: connecting Google App Engine and Microsoft Windows Azure Platform, two of the leading cloud platforms available today. GAE and WAP are two cloud frameworks which have very little in common, making interoperability an absolute necessity.
Full Text Available Interoperability is a tool by which a link is established between systems, information and operations within an organization, or among organizations at a national, regional or even divisional level. Through mutual interoperability, organizations can offer better quality of service at a lower cost. In order to interoperate with one another and to share the needed information, organizations need to employ specific frameworks and models. The present paper intends to provide a framework for interoperability among organizations, using the experience gained by various countries in the area of e-government interoperability. The proposed framework encompasses business, information, semantic and technical aspects. Each aspect in turn comprises different components. The proposed framework was validated using expert opinions.
Liu, Jing; Zhang, Yuan-Ting
While a traditional cuff-based blood pressure (BP) measuring device can only take a snapshot of BP, real-time and continuous measurement of BP without an occluding cuff is preferred; such devices usually use the pulse transit time (PTT) in combination with other physiological parameters to estimate or track BP over a certain period of time after an initial calibration. This article discusses some perspectives on the interoperability of wearable medical devices, based on the IEEE P1708 draft standard, which focuses on the objective performance evaluation of wearable cuffless BP measuring devices. The ISO/IEEE 11073 family of standards, supporting the plug-and-play feature, is intended to enable medical devices to interconnect and interoperate with other medical devices and with computerized healthcare information systems in a manner suitable for the clinical environment. In this paper, the possible adoption of ISO/IEEE 11073 for the interoperability of wearable cuffless BP devices is proposed. Because the continuous and cuffless BP measuring methods differ from conventional ones, the existing device specialization standards of ISO/IEEE 11073 cannot be directly followed when designing a cuffless BP device. Specifically, this paper discusses how the domain information model (DIM), in which vital sign information is abstracted as objects, is used to structure the information about the device and that generated by the device. Although attention should also be paid to adopting the communication standards for other parts of the communication system, applying communication standards that enable the plug-and-play feature allows achieving the interoperability of different cuffless BP measuring devices with possibly different configurations.
Marcos, Mar; Maldonado, Jose A; Martínez-Salvador, Begoña; Boscá, Diego; Robles, Montserrat
patient recruitment in the framework of a clinical trial for colorectal cancer screening. The utilisation of archetypes not only has proved satisfactory to achieve interoperability between CDSSs and EHRs but also offers various advantages, in particular from a data model perspective. First, the VHR/data models we work with are of a high level of abstraction and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, due to their deliberate independence of the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable e.g. feeding back the EHR with data derived by abstraction mechanisms. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development.
Huang, Hong; Gong, Jianya
GML can only achieve geospatial interoperation at the syntactic level. However, it is necessary to resolve differences in spatial cognition in the first place on most occasions, so ontology was introduced to describe geospatial information and services. But it is obviously difficult and improper to let users find, match and compose services, especially on occasions where there are complicated business logics. Currently, with the gradual introduction of Semantic Web technology (e.g., OWL, SWRL), the focus of the interoperation of geospatial information has shifted from the syntactic level to the semantic and even automatic, intelligent level. In this way, the Geospatial Semantic Web (GSM) can be put forward as an augmentation of the Semantic Web that additionally includes geospatial abstractions as well as related reasoning, representation and query mechanisms. To advance the implementation of GSM, we first attempt to construct the mechanisms of modeling and formal representation of geospatial knowledge, which are also the two most foundational phases in knowledge engineering (KE). Our attitude in this paper is quite pragmatic: we argue that geospatial context is a formal model of the discriminating environmental characteristics of geospatial knowledge, and the derivation, understanding and use of geospatial knowledge are located in geospatial context. Therefore, first, we put forward a primitive hierarchy of geospatial knowledge referencing first-order logic, formal ontologies, rules and GML. Second, a metamodel of geospatial context is proposed and we use the modeling methods and representation languages of formal ontologies to process geospatial context. Third, we extend the Web Processing Service (WPS) to be compatible with local DLLs for geoprocessing and to possess inference capability based on OWL.
Rui, Liu; Maode, Deng
Traditional e-learning platforms have the flaws that it is usually difficult to query or position resources and to realize cross-platform sharing and interoperability. In this paper, the Semantic Web and metadata standards are discussed, and an e-learning system framework based on the Semantic Web is put forward to try to resolve the flaws of traditional e-learning platforms.
The description of resources in game semantics has never achieved the simplicity and precision of linear logic, because of a misleading conception: the belief that linear logic is more primitive than game semantics. We advocate instead the contrary: that game semantics is conceptually more primitive than linear logic. Starting from this revised point of view, we design a categorical model of resources in game semantics, and construct an arena game model where the usual notion of bracketing is extended to multi-bracketing in order to capture various resource policies: linear, affine and exponential.
Agostinho, Carlos; Jardim-Goncalves, Ricardo
Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property. When achieved seamlessly, efficiency is increased in the entire product life cycle. Nowadays, most organizations try to attain interoperability by establishing peer-to-peer mappings with the different partners, or, in optimized networks, by using international standard models as the core for information exchange. In current industrial practice, mappings are only defined once, and the morphisms that represent them are hardcoded in the enterprise systems. This solution has been effective for static environments, where enterprise and product models are valid for decades. However, with an increasingly complex and dynamic global market, models change frequently to answer new customer requirements. This paper draws concepts from complex systems science and proposes a framework for sustainable systems interoperability in dynamic networks, enabling different organizations to evolve at their own rate.
M. K. Pawar
Full Text Available The objective of this work is to achieve interoperability of distributed objects based on CORBA middleware technology and standards. The distributed objects for the client-server technology are implemented in the C#.NET framework and the Python language. The interoperability result shows the possibilities of applications in which objects can communicate in different environments and different languages. It also analyzes how to achieve client-server communication in a heterogeneous environment using the OmniORBpy IDL compiler and the IIOP.NET IDLtoCLS mapping. The results obtained demonstrate the interoperability between the .NET framework and the Python language. This paper also summarizes a set of fairly simple examples using some reasonably complex software tools.
Brandt, P.; Basten, T.; Stuijk, S.; Bui, V.; Clercq, P. de; Ferreira Pires, L.; Sinderen, M. van
Much effort has been spent on the optimization of sensor networks, mainly concerning their performance and power efficiency. Furthermore, open communication protocols for the exchange of sensor data have been developed and widely adopted, making sensor data widely available for software applications
Herklotz, Air Force Office of Scientific Research (AFOSR), AFRL-OSR-VA-TR-2012-1101. DISTRIBUTION STATEMENT A. Approved for public release. ...In Massachusetts, the Department of Elementary and Secondary Education (DESE) is responsible for the education of the approximately 550,000...the DESE to track the progress of students as they advance through the grades. Moreover, it is necessary to address the needs of children in early
Moreira, João L.R.; Sinderen, van Marten; Ferreira Pires, Luis; Dockhorn Costa, Patrícia; Zelm, Martin; Doumeingts, Guy; Mendonca, Joao Pedro
Future disease outbreaks may spread faster and stronger than recent epidemics, such as Zika, Ebola and Influenza. The integration of multiple existing Early Warning Systems (EWS) is a requirement to support disease surveillance in combating infectious disease outbreaks. In this direction, numerous a
e.g., PASS, "multi-cast," "swivel chair," "sneaker-net"); this plethora...one IT system to another; "sneaker-net" refers to transferring data from one information system to another using physically removable media such as
Oude Luttighuis, P.H.W.M.; Stap, R.E.; Quartel, D.
Conceptual information modeling is a well-established practice, aimed at preparing the implementation of information systems, the specification of electronic message formats, and the design of information processes. Today's ever more connected world however poses new challenges for conceptual inform
RTA Information Management Systems Branch approval is required for more than one copy to be made or an extract included in another publication. Requests to...the vital need for semantic interoperability has been recognized many times and a few reference projects have been created, but, on the one hand, research
Full Text Available The compatibility and interoperability of GNSS are hot research issues in the international satellite navigation field, and a requirement for integrated multi-GNSS navigation and positioning. The basic concepts of compatibility and interoperability are introduced and the trend towards interoperability among the GNSS providers is discussed. The status and problems of the frequency interoperability of GPS, BeiDou (BDS), GLONASS and Galileo are analyzed. It is pointed out that the frequency interoperability problems will affect manufacturers and multi-GNSS users. The interoperability problems of the reference coordinate systems result not only from the definitions and realizations of the reference coordinate systems but also from the maintenance and update strategies of the reference systems. The effects of time datum interoperability and corresponding resolution strategies are also discussed. The influences of the interoperability problems of GNSS are summarized.
Demchenko, Y.; Makkes, M.X.; Strijkers, R.J.; Ngo, C.
This paper presents on-going research to develop the Intercloud Architecture (ICA) Framework that should address problems in multi-provider multi-domain heterogeneous Cloud based infrastructure services and applications integration and interoperability, including integration and interoperability wit
It is a period of information explosion. Especially for spatial information science, information can be acquired through many means, such as man-made satellites, aeroplanes, lasers, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. A federated database is the best resolution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated databases and interoperability are introduced. Three heterogeneous kinds of spatial data, vector, image and DEM, are used to create an integrated database. A data model of federated spatial databases is given
Feng, Xiaobing; Hu, Haibo
To control counterparty risk, financial regulations such as the Dodd-Frank Act are increasingly requiring standardized derivatives trades to be cleared by central counterparties (CCPs). It is anticipated that in the near term future, CCPs across the world will be linked through interoperability agreements that facilitate risk sharing but also serve as a conduit for transmitting shocks. This paper theoretically studies a networked network with CCPs that are linked through interoperability arrangements. The major finding is that the different configurations of networked network CCPs contribute to the different properties of the cascading failures.
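The shock-transmission mechanism the abstract describes can be illustrated with a toy contagion simulation. This is not the paper's model: the network, capital levels, and loss propagation rule below are simplifying assumptions made for illustration only.

```python
def cascade(capital, exposure, initially_failed):
    """Propagate failures through interoperability links: when a CCP fails,
    each linked CCP absorbs its exposure as a loss, and fails in turn if
    accumulated losses exceed its capital."""
    failed = set(initially_failed)
    losses = {ccp: 0.0 for ccp in capital}
    frontier = list(failed)
    while frontier:
        f = frontier.pop()
        for neighbour, amount in exposure.get(f, {}).items():
            if neighbour in failed:
                continue
            losses[neighbour] += amount
            if losses[neighbour] > capital[neighbour]:
                failed.add(neighbour)
                frontier.append(neighbour)
    return failed

# Hypothetical three-CCP network: exposures are the losses a CCP's
# failure imposes on its interoperability partners.
capital = {"A": 5.0, "B": 2.0, "C": 10.0}
exposure = {"A": {"B": 3.0, "C": 1.0},
            "B": {"C": 4.0},
            "C": {}}
print(sorted(cascade(capital, exposure, ["A"])))  # → ['A', 'B']
```

Here A's failure topples thinly capitalized B, but well-capitalized C absorbs the combined losses, showing how network configuration determines cascade depth.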
Folmer, E.J.A.; Krukkert, D.
Interoperability is of major importance in B2B environments. Starting with EDI in the '80s, interoperability currently relies heavily on XML-based standards. Although these have had great impact, issues remain to be solved for improving B2B interoperability. These issues include lack of dynamics, cost
Wilkinson, Mark D; Senger, Martin; Kawas, Edward; Bruskiewich, Richard; Gouzy, Jerome; Noirot, Celine; Bardou, Philippe; Ng, Ambrose; Haase, Dirk; Saiz, Enrique de Andres; Wang, Dennis; Gibbons, Frank; Gordon, Paul M K; Sensen, Christoph W; Carrasco, Jose Manuel Rodriguez; Fernández, José M; Shen, Lixin; Links, Matthew; Ng, Michael; Opushneva, Nina; Neerincx, Pieter B T; Leunissen, Jack A M; Ernst, Rebecca; Twigger, Simon; Usadel, Bjorn; Good, Benjamin; Wong, Yan; Stein, Lincoln; Crosby, William; Karlsson, Johan; Royo, Romina; Párraga, Iván; Ramírez, Sergio; Gelpi, Josep Lluis; Trelles, Oswaldo; Pisano, David G; Jimenez, Natalia; Kerhornou, Arnaud; Rosset, Roman; Zamacola, Leire; Tarraga, Joaquin; Huerta-Cepas, Jaime; Carazo, Jose María; Dopazo, Joaquin; Guigo, Roderic; Navarro, Arcadi; Orozco, Modesto; Valencia, Alfonso; Claros, M Gonzalo; Pérez, Antonio J; Aldana, Jose; Rojano, M Mar; Fernandez-Santa Cruz, Raul; Navas, Ismael; Schiltz, Gary; Farmer, Andrew; Gessler, Damian; Schoof, Heiko; Groscurth, Andreas
The BioMoby project was initiated in 2001 from within the model organism database community. It aimed to standardize methodologies to facilitate information exchange and access to analytical resources, using a consensus driven approach. Six years later, the BioMoby development community is pleased to announce the release of the 1.0 version of the interoperability framework, registry Application Programming Interface and supporting Perl and Java code-bases. Together, these provide interoperable access to over 1400 bioinformatics resources worldwide through the BioMoby platform, and this number continues to grow. Here we highlight and discuss the features of BioMoby that make it distinct from other Semantic Web Service and interoperability initiatives, and that have been instrumental to its deployment and use by a wide community of bioinformatics service providers. The standard, client software, and supporting code libraries are all freely available at http://www.biomoby.org/.
Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar
In this contribution we show what the workplace of the future could look like and where the Semantic Web opens up new possibilities. We present approaches from the areas of the Semantic Web, knowledge representation, desktop applications and visualization that enable us to reinterpret and reuse a user's existing data. The combination of the Semantic Web and desktop computers brings particular advantages, a paradigm known under the title Semantic Desktop. The described possibilities for application integration are not limited to the desktop, however, but can equally be used in web applications.
Full Text Available The Open Health Tools (OHT) initiative is creating an ecosystem focused on the production of software tooling that promotes the exchange of medical information across political, geographic, cultural, product, and technology lines. At its core, OHT believes that the availability of high-quality tooling that interoperates will propel the industry forward, enabling organizations and vendors to build products and systems that effectively work together. This will "raise the interoperability bar" as a result of having tools that just work. To achieve these lofty goals, careful consideration must be made of the constituencies that will be most affected by an OHT-influenced world. This document outlines a vision of OHT's impact on these stakeholders. It does not explain the OHT process itself or how the OHT community operates. Instead, we place emphasis on the impact of that process within the health industry. The catchphrase "code is king" underpins this document, meaning that the manifestation of any open source community lies in the products and technology it produces.
Wicaksana, I Wayan Simri
Information exchange among the many sources on the Internet is largely autonomous, dynamic and free. This situation drives differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economics domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run on some concepts from different domains, with expert or human judgments as reference. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
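A toy version of the path-based similarity that WordNet measures compute can be sketched as follows. The miniature taxonomy is invented for illustration (real evaluations would query WordNet itself, e.g. through NLTK), but it reproduces the 'bank' ambiguity from the abstract.

```python
# Tiny hand-made taxonomy: each node maps to its parent (None = root).
parent = {
    "bank(finance)": "institution",
    "institution": "entity",
    "bank(river)": "slope",
    "slope": "landform",
    "landform": "entity",
    "entity": None,
}

def ancestors(node):
    """Chain from a node up to the root, including the node itself."""
    chain = []
    while node is not None:
        chain.append(node)
        node = parent[node]
    return chain

def path_similarity(a, b):
    # 1 / (edges from a to the lowest common ancestor
    #      + edges from b to it + 1), as in WordNet's path measure.
    anc_a, anc_b = ancestors(a), ancestors(b)
    for i, node in enumerate(anc_a):
        if node in anc_b:
            return 1.0 / (i + anc_b.index(node) + 1)
    return 0.0

print(path_similarity("bank(finance)", "bank(river)"))   # low: domains differ
print(path_similarity("bank(finance)", "institution"))   # higher: same domain
```

The two 'bank' senses only meet at the root, so their similarity is low, matching the intuition that cross-domain readings of the same word should not be conflated.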
Understanding natural language is a cognitive, information-driven process. Discussing some of the consequences of this fact, the paper offers a novel look at the semantic effect of lexical nouns and the identification of reference types.
Boza, Andrés; Cuenca, Llanos; Poler, Raúl; Michaelides, Zenon
Enterprise resource planning (ERP) systems participate in interoperability projects and this participation sometimes leads to new proposals for the ERP field. The aim of this paper is to identify the role that interoperability plays in the evolution of ERP systems. To go about this, ERP systems have been first identified within interoperability frameworks. Second, the initiatives in the ERP field driven by interoperability requirements have been identified from two perspectives: technological and business. The ERP field is evolving from classical ERP as information system integrators to a new generation of fully interoperable ERP. Interoperability is changing the way of running business, and ERP systems are changing to adapt to the current stream of interoperability.
Full Text Available The Internet of Things has developed rapidly in recent years, and a growing number of devices connect to the Internet. Hence interoperability, i.e., the access and interpretation of unambiguous data, is strongly needed by distributed and heterogeneous devices. Semantics promotes interoperability in the Internet of Things by using ontologies to provide precise definitions of concepts and relations. In this paper, we demonstrate the importance of semantics in three aspects: firstly, semantics means that machines can understand and respond to human commands; secondly, semantics is mainly reflected in the ontology of the physical world; thirdly, semantics is important for interoperability, data integration and reasoning. We then introduce a semantic approach to constructing an environment observation system in the Internet of Things. The environment observation system contains three major components: the first is the ontology of the environment observation system, the second is the semantic map of the environment observation system, and the last is the exposure of the observation data. Finally, the system successfully publishes the environment observation data on the Web. The environment observation system with semantics provides a better service in the Internet of Things.
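The three components the abstract names (ontology, semantic map, data exposure) can be sketched with a minimal, stdlib-only example. All concept names, field names, and units below are illustrative assumptions, not the paper's actual ontology.

```python
import json

# 1. Ontology: concepts and their subclass relations (hypothetical).
ontology = {"Temperature": "Observation", "Humidity": "Observation"}

# 2. Semantic map: raw device fields -> ontology concepts and units.
semantic_map = {"t": ("Temperature", "Cel"), "h": ("Humidity", "%")}

def annotate(raw):
    """Turn a raw sensor reading into semantically annotated records."""
    out = []
    for field, value in raw.items():
        concept, unit = semantic_map[field]
        out.append({"concept": concept,
                    "isA": ontology[concept],
                    "value": value,
                    "unit": unit})
    return out

# 3. Expose the observation data (here, simply serialised as JSON).
raw_reading = {"t": 21.5, "h": 40}
print(json.dumps(annotate(raw_reading), indent=2))
```

A consumer that only understands the ontology's "Observation" concept can now interpret readings from any device whose fields appear in the semantic map, which is the interoperability gain the abstract describes.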
Skoog, A I; McBarron JW 2nd; Severin, G I
The European Space Agency (ESA) and the Russian Space Agency (RKA) are jointly developing a new space suit system for improved extravehicular activity (EVA) capabilities in support of the MIR Space Station Programme, the EVA Suit 2000. Recent national policy agreements between the U.S. and Russia on planned cooperation in manned space also include joint extravehicular activity (EVA). With an increased number of space suit systems and a higher operational frequency towards the end of this century, improved interoperability for both routine and emergency operations is of eminent importance. It is thus timely to report the current status of ongoing work on international EVA interoperability being conducted by the Committee on EVA Protocols and Operations of the International Academy of Astronautics, initiated in 1991. This paper summarises the current EVA interoperability issues to be harmonised and presents quantified vehicle interface requirements for the current U.S. Shuttle EMU and Russian MIR Orlan DMA and the new European/Russian EVA Suit 2000 extravehicular systems. Major critical/incompatible interfaces for suits/mother-craft of different combinations are discussed, and recommendations for standardisations given.
Cox, Simon; Mills, Katie; Tan, Florence
Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in the Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: the vocabulary as a whole, at an ontology URI corresponding to a vocabulary document; each item in the vocabulary, at the item URI; summaries, subsets, and resources derived by transformation; the standard RDF web API, i.e. a SPARQL endpoint; and a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc.). SISSvoc3 thus provides a RESTful SKOS API to query a vocabulary while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA), which connects to a SPARQL endpoint. By using LDA, we also get content negotiation, alternative views, paging, metadata and other functionality provided in a standard way. A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their
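The label-based lookup that such a vocabulary API provides can be sketched with a small in-memory concept store. The URIs, labels, and the two query functions below are illustrative assumptions; the actual SISSvoc URI templates are backed by SPARQL over a full SKOS graph and are richer than this.

```python
# Toy SKOS-style concept store (URIs and labels are invented).
concepts = {
    "http://example.org/voc/basalt": {
        "skos:prefLabel": "basalt",
        "skos:altLabel": ["basaltic rock"],
        "skos:broader": ["http://example.org/voc/igneous-rock"],
    },
    "http://example.org/voc/igneous-rock": {
        "skos:prefLabel": "igneous rock",
        "skos:altLabel": [],
        "skos:broader": [],
    },
}

def concept_by_label(label):
    """Resolve a hypothetical /concept?anylabel={label} template:
    match against skos:prefLabel and skos:altLabel, case-insensitively."""
    hits = []
    for uri, props in concepts.items():
        labels = [props["skos:prefLabel"]] + props["skos:altLabel"]
        if label.lower() in (l.lower() for l in labels):
            hits.append(uri)
    return hits

def broader(uri):
    """Resolve a hypothetical /concept/broader?uri={uri} template:
    follow skos:broader links one step up the hierarchy."""
    return concepts[uri]["skos:broader"]

print(concept_by_label("Basaltic rock"))
print(broader("http://example.org/voc/basalt"))
```

The point of the design is visible even at this scale: a client navigates the vocabulary purely through SKOS semantics (labels and broader/narrower links) without ever writing SPARQL.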
An issue to be taken into consideration in testing is the compatibility and interoperability of the IPsec components when implementing an IPsec solution. This article guides us through some key introductory notions involved in the interoperability problem, gives a short overview of some of these problems, and afterwards discusses some of the IPsec interoperability testing solutions that we should take into consideration.
Dickens, with his adeptness with language, applies semantic deviation skillfully in his realistic novel Oliver Twist. However, most studies and comments at home and abroad mainly focus on aspects such as humanity, society, and characters. This thesis therefore takes a stylistic approach to Oliver Twist from the perspective of semantic deviation, which is achieved through the use of irony, hyperbole, and pun, and analyzes how the application of this technique makes the novel attractive.
Bagha, Karim Nazari
Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…
Radenković, Sonja; Krdžavac, Nenad; Devedžić, Vladan
This paper presents a way to develop a modern assessment system on the Semantic Web. The system is based on the IMS QTI standard (Question and Test Interoperability) and designed by applying the model-driven architecture software engineering standards. It uses the XML Metadata Interchange specification and ontologies. We propose a framework for assessment systems that is reusable and extensible, and that facilitates interoperability between its component systems. The central idea is to use description logic reasoning techniques for intelligent analysis of students' solutions to the problems they work on during assessment sessions with the system, in order to process open-ended questions. This innovative approach can be applied within the IMS QTI standard.
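The description-logic reasoning used to analyze open-ended answers can be illustrated with a minimal subsumption check: a student's answer concept is accepted when the ontology entails that it is subsumed by the expected concept. The concept hierarchy and grading rule below are invented for illustration, not taken from the IMS QTI system described.

```python
# Invented subclass axioms (child -> parent), standing in for an OWL ontology.
SUBCLASS_OF = {
    "MergeSort": "ComparisonSort",
    "QuickSort": "ComparisonSort",
    "ComparisonSort": "SortingAlgorithm",
    "RadixSort": "SortingAlgorithm",
}

def is_subsumed_by(concept, ancestor):
    """Walk the subclass axioms upward, as a DL reasoner would entail subsumption."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def grade(answer_concept, expected_concept):
    """Accept the answer if its concept is subsumed by the expected concept."""
    return is_subsumed_by(answer_concept, expected_concept)
```

A real system would delegate the entailment check to a DL reasoner over the QTI ontology; the traversal above only captures the shape of the inference.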
In practice, protocol interoperability testing remains laborious, error-prone, and of limited effectiveness, even for products that have passed conformance testing. Deadlock and asymmetric data communication are familiar in interoperability testing, and it is often very hard to trace their causes. Previous work has not provided a coherent way to analyze why interoperability breaks among protocol implementations under test. In this paper, an alternative approach is presented that analyzes these problems from the viewpoint of implementation structures. Sequential and concurrent structures are both representative implementation structures, especially in the event-driven development model. Our research mainly discusses the influence of sequential and concurrent structures on interoperability, with two instructive conclusions: (a) a sequential structure may lead to deadlock; (b) a concurrent structure may lead to asymmetric data communication. Implementation structures therefore bear on interoperability in a way that has received little attention before; to some extent, they are decisive for the result of interoperability testing. Moreover, a concurrent structure with a sound task-scheduling strategy may contribute to the interoperability of a protocol implementation. Here, model checking is introduced into interoperability analysis for the first time. As the paper shows, it is an effective way to validate developers' choices of implementation structures or strategies.
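Conclusion (a) can be illustrated with a toy simulation: two strictly sequential peers that each refuse to send before they have received never make progress, whereas a concurrent structure (send and receive in separate tasks) would break the cycle. The scheduling model below is a deliberately simplified sketch, not the paper's formal model.

```python
def step(peer, inbox):
    """One scheduling step of a sequential peer: it can proceed (and send)
    only once its pending receive has been satisfied by a non-empty inbox."""
    if peer["waiting_for_msg"] and not inbox:
        return False            # blocked: the receive must complete first
    peer["waiting_for_msg"] = False
    return True

def run(peers, inboxes, max_steps=10):
    """Round-robin scheduler; report deadlock if no peer can make progress."""
    for _ in range(max_steps):
        progressed = [step(p, inboxes[name]) for name, p in peers.items()]
        if not any(progressed):
            return "deadlock"
    return "ok"

# Both peers start by waiting for a message the other has not yet sent.
peers = {"A": {"waiting_for_msg": True}, "B": {"waiting_for_msg": True}}
inboxes = {"A": [], "B": []}
result = run(peers, inboxes)
```

A model checker would find the same cycle as an unreachable progress state; the simulation merely makes the structural cause visible.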
Bonino, Dario; Corno, Fulvio
This paper introduces an ontology-based model for domotic device inter-operation. Starting from a previously published ontology (DogOnt), a refactoring and extension is described that allows device capabilities, states, and commands to be represented explicitly, and that supports abstract modeling of device inter-operation.
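A minimal sketch of the kind of explicit capability/state/command model described, assuming an invented lamp device class; DogOnt itself expresses this in OWL rather than Python, so the classes below only mirror the modeling idea.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceClass:
    """A device class declaring its states and the commands that move between them."""
    name: str
    states: set
    commands: dict = field(default_factory=dict)  # command -> (from_state, to_state)

# Invented example device class.
LAMP = DeviceClass(
    name="Lamp",
    states={"On", "Off"},
    commands={"switchOn": ("Off", "On"), "switchOff": ("On", "Off")},
)

def apply_command(device_state, device_class, command):
    """Inter-operation step: a command fires only in the state it applies to."""
    src, dst = device_class.commands[command]
    if device_state != src:
        raise ValueError(f"{command} not applicable in state {device_state}")
    return dst

state = apply_command("Off", LAMP, "switchOn")
```

Making states and commands explicit like this is what lets an inter-operation engine check, before dispatching, whether a requested command is meaningful for the device's current state.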
Kalb, Hendrik; Lazaridou, Paraskevi; Pinsent, Edward;
The interoperability of web archives and digital libraries is crucial to avoid silos of preserved data and content. While various research efforts focus on specific facets of the challenge to interoperate, there is a lack of empirical work about the overall situation and its actual challenges. We conduct...
Madureira, A.; Den Hartog, F.; Silva, E.; Baken, N.
Interoperability refers to the ability of two or more systems or components to exchange information and to use the information that has been exchanged. The importance of interoperability has grown together with the adoption of Digital Information Networks (DINs). DINs refer to information networks s
The OGC Interoperability Program is a source of innovation in the development of open standards. Its approach to innovation is based on hands-on, collaborative engineering leading to more mature standards and implementations. The process of the Interoperability Program engages a community of sponsors and participants through an economic model that benefits all involved. Each initiative begins with an innovative approach to identifying interoperability needs, followed by agile software development to advance the state of technology to the benefit of society. Over eighty initiatives have been conducted in the Interoperability Program since the breakthrough Web Mapping Testbed began the program in 1999. OGC standards that were initiated in the Interoperability Program are the basis of two thirds of the certified compliant products.
Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok
The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.
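The semantic-discovery problem can be sketched as a concept-indexed registry that lets a service in one domain find IoT resources registered by another domain, matching by ontology concept rather than by device name. The registry entries and concept URIs below are invented, not part of the ISSP.

```python
# Invented registry of IoT resources annotated with ontology concepts.
REGISTRY = [
    {"id": "sensor-17", "domain": "building", "concept": "ex:TemperatureSensor"},
    {"id": "sensor-42", "domain": "traffic",  "concept": "ex:TemperatureSensor"},
    {"id": "cam-3",     "domain": "traffic",  "concept": "ex:Camera"},
]

def discover(concept, exclude_domain=None):
    """Semantic discovery: match resources by ontology concept, optionally
    restricting results to domains other than the requester's own."""
    return [r["id"] for r in REGISTRY
            if r["concept"] == concept and r["domain"] != exclude_domain]

# A building-domain service discovering temperature sensors in other domains:
cross_domain = discover("ex:TemperatureSensor", exclude_domain="building")
```

The point of the concept annotation is exactly this cross-domain match: the building service never needs to know the traffic domain's device naming scheme.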
We first present our work in machine translation, during which we used aligned sentences to train a neural network to embed n-grams of different languages into a $d$-dimensional space, such that n-grams that are translations of each other are close with respect to some metric. Good n-gram-to-n-gram translation results were achieved, but full-sentence translation is still problematic. We realized that learning the semantics of sentences and documents was the key to solving many natural language processing problems, and thus moved to the second part of our work: sentence compression. We introduce a flexible neural network architecture for learning embeddings of words and sentences that extract their semantics, propose an efficient implementation in the Torch framework, and present embedding results comparable to those obtained with classical neural language models, while being more powerful.
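The retrieval step implied by the shared embedding space can be sketched as a nearest-neighbour lookup: a translation candidate for a source n-gram is the closest target-language n-gram under the metric. The 3-dimensional vectors below are invented stand-ins for learned embeddings.

```python
import math

def euclidean(u, v):
    """Distance in the shared embedding space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Invented embeddings; in the paper these come from the trained network.
SOURCE = {"red wine": (0.9, 0.1, 0.0)}
TARGET = {"vin rouge": (0.88, 0.12, 0.02), "pain": (0.0, 0.9, 0.3)}

def translate(ngram):
    """Nearest neighbour in the target language's side of the space."""
    q = SOURCE[ngram]
    return min(TARGET, key=lambda t: euclidean(q, TARGET[t]))

best = translate("red wine")
```

This also shows why full-sentence translation is harder: a sentence is not a fixed point in a small lookup table, so composing n-gram neighbours does not by itself yield a fluent sentence.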
Paul J. E. Dekker
In the last decade the enterprise of formal semantics has been under attack from several philosophical and linguistic perspectives, and it has certainly suffered from its own scattered state, which hosts quite a variety of paradigms that may seem incompatible. It will not do to try to answer the arguments of the critics, because the arguments are often well-taken. The negative conclusions, however, I believe are not. The only adequate reply seems to be a constructive one, which puts several pieces of formal semantics, in particular dynamic semantics, together again. In this paper I try to sketch an overview of tasks, techniques, and results, which serves to at least suggest that it is possible to develop a coherent overall picture of undeniably important and structural phenomena in the interpretation of natural language. The idea is that the concept of meanings as truth conditions after all provides an excellent start for an integrated study of the meaning and use of natural language, and that an extended notion of goal-directed pragmatics naturally complements this picture. None of the results reported here are really new, but we think it is important to re-collect them.
Soliman, Sara Saad; El-Sayed, Maged F; Hassan, Yasser F
This paper presents a novel approach to search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies a spreading activation technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The results of the experiments confirm that the proposed solution achieves remarkable results in terms of precision.
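A minimal sketch of the spreading-activation idea, under an invented concept graph and decay factor: two documents with no terms in common still score as similar, because activation spreading from each document's concepts meets at a shared neighbouring concept.

```python
# Invented undirected semantic-relatedness links between concepts.
CONCEPT_LINKS = {
    "car": ["vehicle"], "vehicle": ["car", "truck"], "truck": ["vehicle"],
}

def activate(seed_concepts, decay=0.5, steps=2):
    """Propagate activation from seed concepts through the graph with decay."""
    level = {c: 1.0 for c in seed_concepts}
    for _ in range(steps):
        spread = {}
        for c, a in level.items():
            for n in CONCEPT_LINKS.get(c, []):
                spread[n] = max(spread.get(n, 0.0), a * decay)
        for n, a in spread.items():
            level[n] = max(level.get(n, 0.0), a)
    return level

def similarity(doc_a_concepts, doc_b_concepts):
    """Overlap of the two activation patterns (shared activated concepts)."""
    act_a, act_b = activate(doc_a_concepts), activate(doc_b_concepts)
    return sum(min(act_a[c], act_b[c]) for c in act_a if c in act_b)

# "car" and "truck" share no terms, yet overlap via the activated "vehicle":
s = similarity({"car"}, {"truck"})
```

Term-based clustering would give these two documents a similarity of zero; the activation overlap is what lets semantically related documents land in the same cluster.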
Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.
LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space-based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, missions and PIs. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable, and opens the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on-demand reformatting of data and timestamps, subsetting and aggregation, and other server-side functionality via a RESTful, OPeNDAP-compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front-end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently, the number and variety of datasets offered by LISIRD has grown to several dozen, with many more to come. This poster will discuss the design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.
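The on-demand subsetting that LaTiS provides can be sketched as URL composition in the OPeNDAP style: dataset, output format, and selection operators in the query string. The host, dataset name and selection syntax below are illustrative assumptions; consult the actual LISIRD/LaTiS documentation for the real URL grammar.

```python
def subset_url(base, dataset, fmt, *selections):
    """Compose an OPeNDAP-flavoured subsetting URL (illustrative grammar)."""
    return f"{base}/{dataset}.{fmt}?" + "&".join(selections)

# Hypothetical request: one year of a total-solar-irradiance dataset as CSV.
url = subset_url(
    "https://lisird.example.org/latis", "sorce_tsi", "csv",
    "time>=2010-01-01", "time<2011-01-01",
)
```

The design point is that format conversion and subsetting happen server-side, so any OPeNDAP-aware client can consume the same dataset without local reformatting.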
Fonou-Dombeu, Jean Vincent; 10.5121/ijwest.2011.2401
Electronic government (e-government) has been one of the most active areas of ontology development during the past six years. In e-government, ontologies are being used to describe and specify e-government services (e-services) because they enable easy composition, matching, mapping and merging of various e-government services. More importantly, they also facilitate the semantic integration and interoperability of e-government services. However, it is still unclear in the current literature how an existing ontology building methodology can be applied to develop semantic ontology models in a government service domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic ontology models in a government service domain. Firstly, the Uschold and King methodology is presented, discussed and applied to build a government domain ontology. Secondly, the domain ontology is evaluated for semantic consistency using its semi-formal representation in Description Logic. Thirdly, an...
Niejahr, J. [Siemens Canada Ltd., Mississauga, ON (Canada); Englert, H.; Dawidczak, H. [Siemens AG, Munich (Germany)
The worldwide established communication standard for power utility automation is the International Electrotechnical Commission (IEC) 61850. The key drivers for its use are performance, reduced life-cycle costs and interoperability. The major application of IEC 61850 is in substation automation, where practical experience from thousands of installations has been gained. Most of these installations are primarily single-vendor solutions with some special devices from other vendors, while only a few are full multivendor systems. These multivendor projects showed that the interoperability capabilities of the available products and systems are currently limited, requiring additional engineering efforts. This paper provided a definition of interoperability in the context of IEC 61850 and discussed the experiences collected in multivendor projects and interoperability tests. It identified the technical reasons for limited interoperability. In order to help overcome the interoperability limitations and allow the exchange of devices with a minimum of re-engineering, a new concept for flexible IEC 61850 data modeling was also presented. Recommendations were offered as to how this concept could be applied in practice in order to avoid additional engineering costs. It was concluded that the new concept for flexible adaptation of IEC 61850 data models and communication services improved the interoperability of products and systems regarding simplicity and functionality. 10 refs., 4 figs.
Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...
Lynnes, C.; Leptoukh, G.; Berrick, S.; Shen, S.; Prados, A.; Fox, P.; Yang, W.; Min, M.; Holloway, D.; Enloe, Y.
As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This implies a need for deeper data interoperability than we have now. Many efforts (e.g. OPeNDAP, Open Geospatial Consortium) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross- calibrated, validated, inter-compared and fused. We must determine how to match up data sets that are related, yet different in significant ways: the exact nature of the phenomenon being measured, measurement technique, exact location in space-time, or the quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, the results can be meaningless or even lead to an incorrect interpretation of the data. Most of these distinctions trace back to how the data came to be: sensors, processing, and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences and/or calibration issues. This provenance information must therefore be captured in a semantic framework that allows sophisticated data inter-use tools to incorporate it, and eventually aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representations, and data quality representation in a well-structured, machine- readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance- related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance
Table 4. LISI Reference Model (Ford, 2008). Referencing back to Ford's method and the standard definition of interoperability... using the method developed by Dr. Thomas Ford, where higher levels of interoperability maturity will result in a higher interoperability score... Ford's Interoperability Measurement Method
The research presented in this dissertation concerns the identification of problems and provision of solutions for increasing the degree of interoperability between CAD, CACSD (Computer Aided Control Systems Design) and CAR (Computer Aided Robotics) in Computer Integrated Manufacturing...
· The development of a STEP-based interface for general control system data and functions, especially related to robot motion control, for interoperability of CAD, CACSD, and CAR systems, for the extension of inter-system communication capabilities beyond the stage achieved up to now. This interface development comprehends the following work:
· The definition of the concepts of 'information' and 'information model', and the selection of a proper information modeling methodology within the STEP methodologies.
· The elaboration of a general function model of a generic robot motion controller in IDEF0 for interface...
Narock, Thomas William
The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…
Interoperability is a requirement for the successful deployment of Electronic Health Records (EHRs). EHRs improve the quality of healthcare by enabling access to all relevant information at the diagnostic decision moment, regardless of location. An EHR is a system that results from the cooperation of several heterogeneous distributed subsystems that need to successfully exchange information relative to a specific healthcare process. This paper analyzes interoperability impediments in healthcare by first defining them and providing concrete healthcare examples, followed by a discussion of how specifications can be defined and how verification can be conducted to eliminate those impediments and ensure interoperability in healthcare. The paper also analyzes how Integrating the Healthcare Enterprise (IHE) has been successful in enabling interoperability, and identifies some neglected aspects that need attention.
Jardim-Gonçalves, Ricardo; Popplewell, Keith; Mendonça, João
A concise reference to the state of the art in systems interoperability, Enterprise Interoperability VII will be of great value to engineers and computer scientists working in manufacturing and other process industries and to software engineers and electronic and manufacturing engineers working in the academic environment. Furthermore, it shows how knowledge of the meaning within information and the use to which it will be put have to be held in common between enterprises for consistent and efficient inter-enterprise networks. Over 30 papers, ranging from academic research through case studies to industrial and administrative experience of interoperability show how, in a scenario of globalised markets, where the capacity to cooperate with other organizations efficiently is essential in order to remain economically, socially and environmentally cost-effective, the most innovative digitized and networked enterprises ensure that their systems and applications are able to interoperate across heterogeneous collabo...
Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.J.
This report presents on-going research to develop the Intercloud Architecture Framework (ICAF) that addresses interoperability and integration issues in multi-provider multi-domain heterogeneous Cloud based infrastructure services and applications provisioning, including integration and interoperabi
Kargakis, Yannis; Tzitzikas, Yannis; van Horik, M.P.M.
This paper presents Epimenides, a system that implements a novel interoperability dependency reasoning approach for assisting digital preservation activities. A distinctive feature is that it can model also converters and emulators, and the adopted modelling approach enables the automatic reasoning
Doumeingts, Guy; Katzy, Bernhard; Chalmeta, Ricardo
Within a scenario of globalised markets, where the capacity to efficiently cooperate with other firms starts to become essential in order to remain in the market in an economically, socially and environmentally cost-effective manner, it can be seen how the most innovative enterprises are beginning to redesign their business model to become interoperable. This goal of interoperability is essential, not only from the perspective of the individual enterprise but also in the new business structures that are now emerging, such as supply chains, virtual enterprises, interconnected organisations or extended enterprises, as well as in mergers and acquisitions. Composed of over 40 papers, Enterprise Interoperability V ranges from academic research through case studies to industrial and administrative experience of interoperability. The international nature of the authorship continues to broaden. Many of the papers have examples and illustrations calculated to deepen understanding and generate new ideas. The I-ESA'12 Co...
Marilda Lopes Ginez de Lara
The aim of this study is to discuss the need for formal documentary languages as a condition for functioning in the Semantic Web. Based on a bibliographic review, Linked Open Data is presented as an initial condition for the operationalization of the Semantic Web, similar to the Linked Open Vocabularies movement that aims to promote interoperability among vocabularies. We highlight the Simple Knowledge Organization System (SKOS) format by analyzing its main characteristics, and present the new standard ISO 25964-1/2:2011/2012, Thesauri and interoperability with other vocabularies, which revises previous recommendations and adds requirements for the interoperability and mapping of vocabularies. We discuss conceptual problems in the formalization of vocabularies and the need to invest critically in their operationalization, suggesting alternatives to harness the mapping of vocabularies.
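The vocabulary mapping that ISO 25964 addresses can be sketched with SKOS mapping properties: cross-vocabulary assertions such as skos:exactMatch and skos:broadMatch, with exact matches preferred when resolving a term. The vocabulary terms and the preference ordering below are invented for illustration.

```python
# Invented cross-vocabulary mapping triples (subject, SKOS property, object).
MAPPINGS = [
    ("vocabA:Automobile", "skos:exactMatch", "vocabB:Car"),
    ("vocabA:Automobile", "skos:broadMatch", "vocabB:Vehicle"),
]

def translate_term(term, target_prefix):
    """Resolve a term into another vocabulary, preferring the tightest match."""
    preference = {"skos:exactMatch": 0, "skos:closeMatch": 1, "skos:broadMatch": 2}
    candidates = sorted(
        (preference[p], o) for s, p, o in MAPPINGS
        if s == term and o.startswith(target_prefix) and p in preference
    )
    return candidates[0][1] if candidates else None

t = translate_term("vocabA:Automobile", "vocabB:")
```

Formalizing vocabularies in SKOS is what makes this resolution mechanical: without the mapping properties, interoperability falls back to ad hoc string matching between thesauri.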
Richardson, David; Nyenhuis, Michael; Zsoter, Ervin; Pappenberger, Florian
"Understanding the Earth system — its weather, climate, oceans, atmosphere, water, land, geodynamics, natural resources, ecosystems, and natural and human-induced hazards — is crucial to enhancing human health, safety and welfare, alleviating human suffering including poverty, protecting the global environment, reducing disaster losses, and achieving sustainable development. Observations of the Earth system constitute critical input for advancing this understanding." With this in mind, the Group on Earth Observations (GEO) started implementing the Global Earth Observation System of Systems (GEOSS). GEOWOW, short for "GEOSS interoperability for Weather, Ocean and Water", is supporting this objective. GEOWOW's main challenge is to improve Earth observation data discovery, accessibility and exploitability, and to evolve GEOSS in terms of interoperability, standardization and functionality. One of the main goals behind the GEOWOW project is to demonstrate the value of the TIGGE archive in interdisciplinary applications, providing a vast amount of useful and easily accessible information to the users through the GEO Common Infrastructure (GCI). GEOWOW aims at developing functionalities that will allow easy discovery, access and use of TIGGE archive data and of in-situ observations, e.g. from the Global Runoff Data Centre (GRDC), to support applications such as river discharge forecasting. TIGGE (THORPEX Interactive Grand Global Ensemble) is a key component of THORPEX: a World Weather Research Programme to accelerate the improvements in the accuracy of 1-day to 2-week high-impact weather forecasts for the benefit of humanity. The TIGGE archive consists of ensemble weather forecast data from ten global NWP centres, starting from October 2006, which has been made available for scientific research. The TIGGE archive has been used to analyse hydro-meteorological forecasts of flooding in Europe as well as in China. In general the analysis has been favourable in terms of
Pesquer, Lluís; Masó, Joan; Stasch, Christoph
There is a lot of water information and many tools in Europe applicable to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them use recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols and policy briefs related to water, and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms and generate a palette of interchangeable components that are able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, comprises the following steps:
- Extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant with the OGC SOS 2.0 Hydrology Profile Best Practice).
- Modelling of floods using WPS 2.0, WaterML 2.0 data and weather forecast models as input.
- Evaluation of the applicability of Sensor Notification Services in water emergencies.
- Open distribution of the input and output data as OGC web services (WaterML / WCS / WFS) and with visualization utilities (WMS). The architecture
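The first step above, fetching gauge observations as WaterML 2.0, can be sketched as an SOS 2.0 GetObservation request. The endpoint URL and the offering/property identifiers below are invented; the KVP parameter names follow the OGC SOS 2.0 binding but should be verified against the service's capabilities document.

```python
from urllib.parse import urlencode

def get_observation_url(endpoint, offering, observed_property, start, end):
    """Compose an SOS 2.0 GetObservation KVP request asking for WaterML 2.0."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        # Time window on the phenomenon time of the observations.
        "temporalFilter": f"om:phenomenonTime,{start}/{end}",
        "responseFormat": "http://www.opengis.net/waterml/2.0",
    }
    return f"{endpoint}?{urlencode(params)}"

# Hypothetical gauge in the Scheldt basin, one month of water-level data:
url = get_observation_url(
    "https://sos.example.org/scheldt", "gauge-123", "water-level",
    "2015-01-01T00:00:00Z", "2015-01-31T23:59:59Z",
)
```

The returned WaterML 2.0 document would then feed the WPS flood-modelling step as input.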
Cloud computing is one of the latest technologies that assures reliable delivery of on-demand computing services over the Internet. Cloud service providers have established geographically distributed data centers and computing resources, which are available online as services. The clouds operated by different service providers, working together in collaboration, can open up many more possibilities for innovative scenarios, with huge amounts of resources provisioned on demand. However, current cloud systems do not support intercloud interoperability. This paper is thus motivated to address intercloud interoperability by analyzing different methodologies that have been applied to resolve various scenarios of interoperability. Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods have been used to address interoperability in various scenarios, which also opens up space to address intercloud interoperability by making use of these well-accepted methodologies. The focus of this document is to show that intercloud interoperability can be supported through a model-driven approach and service-oriented systems. Moreover, the current state of the art in intercloud computing, and the concepts and benefits of MDA and SOA, are discussed in the paper. The paper also proposes a generic architecture for an MDA-SOA based framework, which can be useful for developing applications that require intercloud interoperability. The paper justifies the usability of the framework with a use-case scenario for dynamic workload migration among heterogeneous clouds.
Yuan, Ying; Mei, Kun; Bian, Fuling
With the growth of World Wide Web technologies, the access to and use of geospatial information has changed radically in the past decade. Previously, the data processed by a GIS, as well as its methods, resided locally and contained information that was sufficiently unambiguous within the respective information community. Now, both data and methods may be retrieved and combined from anywhere in the world, escaping their local contexts. The last few years have seen growing interest in the semantic geospatial web. With the development of semantic web technologies, it has become possible to address the heterogeneity/interoperation problem in the GIS community. Semantic geospatial web applications can support a wide variety of tasks, including data integration, interoperability, knowledge reuse, spatial reasoning and many others. This paper proposes a flexible framework called GeoSWF (short for Geospatial Semantic Web Framework), which supports the semantic integration of distributed and heterogeneous geospatial information resources, as well as semantic query and spatial-relationship reasoning. We design the architecture of GeoSWF by extending the MVC pattern. GeoSWF uses the geo-2007.owl ontology proposed by the W3C as the reference ontology for geospatial information, and designs different application ontologies according to the situation of the heterogeneous geospatial information resources. A Geospatial Ontology Creating Algorithm (GOCA) is designed to convert geospatial information into ontology instances represented in RDF/OWL. On top of these ontology instances, GeoSWF carries out semantic reasoning using the rule set stored in the knowledge base to generate new system queries. The query results are ranked by ordering the Euclidean distance of the ontology instances. Finally, the paper gives conclusions and future work.
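The final ranking step described above can be sketched as ordering candidate results by Euclidean distance from a query point. The dict-based instances below are an assumption for illustration; GeoSWF itself stores instances as RDF/OWL.

```python
import math

def rank_by_distance(instances, query_point):
    """Order ontology-instance stand-ins (dicts with x/y coordinates)
    by Euclidean distance from the query location, nearest first."""
    qx, qy = query_point
    def dist(inst):
        return math.hypot(inst["x"] - qx, inst["y"] - qy)
    return sorted(instances, key=dist)

places = [
    {"name": "lake", "x": 3.0, "y": 4.0},   # distance 5 from origin
    {"name": "well", "x": 1.0, "y": 1.0},   # distance ~1.41
    {"name": "dam",  "x": 6.0, "y": 8.0},   # distance 10
]
ranked = rank_by_distance(places, (0.0, 0.0))
# nearest first: well, lake, dam
```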
Cortez and Pizarro could not have conquered Mexico and Peru without the wise use of alliances with native tribes. On the other hand, the sixteenth and...
Das, Sudeshna; Girard, Lisa; Green, Tom; Weitzman, Louis; Lewis-Bowen, Alister; Clark, Tim
Web-based biomedical communities are becoming an increasingly popular vehicle for sharing information amongst researchers and are fast gaining an online presence. However, information organization and exchange in such communities is usually unstructured, rendering interoperability between communities difficult. Furthermore, specialized software to create such communities at low cost-targeted at the specific common information requirements of biomedical researchers-has been largely lacking. At the same time, a growing number of biological knowledge bases and biomedical resources are being structured for the Semantic Web. Several groups are creating reference ontologies for the biomedical domain, actively publishing controlled vocabularies and making data available in Resource Description Framework (RDF) language. We have developed the Science Collaboration Framework (SCF) as a reusable platform for advanced structured online collaboration in biomedical research that leverages these ontologies and RDF resources. SCF supports structured 'Web 2.0' style community discourse amongst researchers, makes heterogeneous data resources available to the collaborating scientist, captures the semantics of the relationship among the resources and structures discourse around the resources. The first instance of the SCF framework is being used to create an open-access online community for stem cell research-StemBook (http://www.stembook.org). We believe that such a framework is required to achieve optimal productivity and leveraging of resources in interdisciplinary scientific research. We expect it to be particularly beneficial in highly interdisciplinary areas, such as neurodegenerative disease and neurorepair research, as well as having broad utility across the natural sciences.
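The SCF approach above rests on publishing community resources as RDF. A toy sketch of serializing such resources as N-Triples follows; the namespaces and predicate names are illustrative assumptions, not the actual SCF or StemBook vocabulary.

```python
# Stand-in namespaces (assumptions, not real SCF URIs).
EX = "http://example.org/stembook/"
DCT = "http://purl.org/dc/terms/"

# (subject, predicate, object) tuples; literal objects carry their own
# quotes, resource objects their own angle brackets, in this toy model.
triples = [
    (EX + "resource/42", DCT + "title", '"Neural stem cell review"'),
    (EX + "resource/42", DCT + "creator", "<" + EX + "person/7>"),
]

def to_ntriples(triples):
    """Serialize triples as N-Triples lines (one statement per line)."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith(('"', "<")) else "<" + o + ">"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

doc = to_ntriples(triples)
```

Exchanging such statements, rather than free text, is what makes the community content machine-interpretable across sites.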
Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.
The generation of high quality 3D models can still be very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched by metadata. In this framework, the ongoing EU-funded project INCEPTION - Inclusive Cultural Heritage in Europe through 3D semantic modelling - proposes a workflow aimed at the achievement of efficient 3D digitization methods, post-processing tools for enriched semantic modelling, and web-based solutions and applications to ensure wide access for experts and non-experts. In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. The purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.
Full Text Available Towards Interoperable Preservation Repositories (TIPR) is a project funded by the Institute of Museum and Library Services to create and test a Repository eXchange Package (RXP). The package will make it possible to transfer complex digital objects between dissimilar preservation repositories. For reasons of redundancy, succession planning and software migration, repositories must be able to exchange copies of archival information packages with each other. Every repository application, however, describes and structures its archival packages differently. Therefore each system produces dissemination packages that are rarely understandable or usable as submission packages by other repositories. The RXP is an answer to that mismatch. Other solutions for transferring packages between repositories focus either on transfers between repositories of the same type, such as DSpace-to-DSpace transfers, or on processes that rely on central translation services. Rather than build translators between many dissimilar repository types, the TIPR project has defined a standards-based package of metadata files that can act as an intermediary information package, the RXP, a lingua franca all repositories can read and write.
Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja
eLearning educative processes are a challenge for educational institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored in innovative ways, SOLE provides a platform in which exam questions can be produced with the support of Web 2.0 tools, catalogued and labeled via the semantic web, and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions, particularized for the Software Engineering domain, based on semantics and built using semantic web and eLearning standards, such as the IMS Question and Test Interoperability specification 2.1.
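The exchange format named above can be illustrated with a minimal IMS QTI 2.1-style choice item generated with the standard library. Element names follow the QTI 2.1 vocabulary, but the item is deliberately stripped down and not guaranteed to validate against the full specification schema.

```python
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"

def make_choice_item(identifier, prompt, choices):
    """Build a minimal single-choice QTI-style assessmentItem."""
    item = ET.Element("assessmentItem",
                      {"xmlns": QTI_NS, "identifier": identifier,
                       "title": prompt, "adaptive": "false",
                       "timeDependent": "false"})
    body = ET.SubElement(item, "itemBody")
    interaction = ET.SubElement(body, "choiceInteraction",
                                {"responseIdentifier": "RESPONSE",
                                 "maxChoices": "1"})
    ET.SubElement(interaction, "prompt").text = prompt
    for ident, text in choices:
        ET.SubElement(interaction, "simpleChoice",
                      {"identifier": ident}).text = text
    return ET.tostring(item, encoding="unicode")

xml_doc = make_choice_item("q1", "What does UML stand for?",
                           [("a", "Unified Modeling Language"),
                            ("b", "Universal Markup Library")])
```

Serializing questions this way is what lets them move between QTI-aware repositories and delivery engines.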
M. Petrov, "Event-Driven Interoperability Framework For Interoperation In E-Learning Information Systems - Monitored Repository", IADAT-e2006, 3rd International Conference on Education, Barcelona (Spain), July 12-14, 2006, ISBN: 84-933971-9-9, pp. 198-202.
Bénaben, Frédérick; Poler, Raúl; Bourrières, Jean-Paul
A concise reference to the state of the art in systems interoperability, Enterprise Interoperability VI will be of great value to engineers and computer scientists working in manufacturing and other process industries, and to software engineers and electronic and manufacturing engineers working in the academic environment. Over 40 papers, ranging from academic research through case studies to industrial and administrative experience of interoperability, show how, in a scenario of globalised markets where the capacity to cooperate with other firms efficiently becomes essential in order to remain in the market in an economically, socially and environmentally cost-effective manner, the most innovative enterprises are beginning to redesign their business models to become interoperable. This goal of interoperability is essential, not only from the perspective of the individual enterprise but also for the new business structures that are now emerging, such as supply chains, virtual enterprises, interconnected...
Lutz, M.; Portele, C.; Cox, S.; Murray, K.
external vocabulary. In the former case, for each value, an external identifier, one or more labels (possibly in different languages), a definition and other metadata should be specified. In the latter case, the external vocabulary should be characterised, e.g. by specifying the version to be used, the format(s) in which the vocabulary is available, possible constraints (e.g. if only a specific part of the external list is to be used), rules for using values in the encoding of instance data, and the maintenance rules applied to the external vocabulary. This information is crucial for enabling implementation and interoperability in distributed systems (such as SDIs) and should be made available through a code list registry. Thus, while the information on allowed code list values is usually managed outside the UML application schema, we recommend the inclusion of «codeList»-stereotyped classes in the model for semantic clarity. Information on the obligation, extensibility and a reference to the specified values should be provided through tagged values. Acknowledgements: The authors would like to thank the INSPIRE Thematic Working Groups, the Data Specifications Drafting Team and the JRC Contact Points for their contributions to the discussions on code lists in INSPIRE and to this abstract.
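The per-value metadata listed above (identifier, multilingual labels, definition) can be sketched as a registry entry. The element names below are assumptions for illustration, not an INSPIRE registry schema.

```python
import xml.etree.ElementTree as ET

def codelist_entry(value_id, labels, definition):
    """Render one code list value with its external identifier,
    multilingual labels and definition as a small XML fragment."""
    entry = ET.Element("value", {"id": value_id})
    for lang, text in labels.items():
        ET.SubElement(entry, "label", {"lang": lang}).text = text
    ET.SubElement(entry, "definition").text = definition
    return ET.tostring(entry, encoding="unicode")

xml_entry = codelist_entry(
    "http://example.org/codelist/SurfaceWater/river",
    {"en": "river", "de": "Fluss"},
    "A natural flowing watercourse.")
```

A registry serving such entries is what lets distributed systems resolve code list values consistently.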
Cheung David W
Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interfaces are not machine-friendly, (3) they use non-standard formats for data input and output, (4) they do not exploit standards to define application interfaces and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH keywords that correlates to the input, grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates
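The document-style messaging mentioned above can be sketched as wrapping an XML payload in a SOAP 1.1 envelope. The operation and element names (`GetMeshKeywords`, `spotId`) are assumptions for illustration, not the actual HAPI service interface.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def wrap_in_envelope(payload):
    """Wrap an XML payload element in a SOAP 1.1 envelope/body, the
    shape used by document-style web service messages."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(payload)
    return ET.tostring(env, encoding="unicode")

# Assumed request document: a list of microarray spot IDs.
request = ET.Element("GetMeshKeywords")
for spot_id in ("SPOT_101", "SPOT_102"):
    ET.SubElement(request, "spotId").text = spot_id

message = wrap_in_envelope(request)
```

Because the payload is structured XML rather than HTML, a downstream workflow engine can parse the response mechanically instead of scraping it.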
Business collaboration networks provide collaborative organizations with a favorable context for automated business process interoperability. This paper aims to present a novel approach for assessing the interoperability of process-driven services by considering the three main aspects of interoperation: potentiality, compatibility and operational performance. It also presents a software tool that supports the proposed assessment method. In addition to its capacity to track and control the evolution of the interoperation degree over time, the proposed tool measures the effort required to reach a planned degree of interoperability. Public accounting of a financial authority is given as an illustrative case study of interoperability monitoring in a public collaboration network.
Full Text Available Defense modeling and simulation requires interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to dynamically adapt to various war-game events, commands and controls. In this paper, we propose a semantic web service based methodology to develop war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator is constructed to demonstrate the methodology and successfully shows that the levels of interoperability and autonomy can be greatly improved.
Full Text Available For better collaboration between tutor and learner in discussion forums, we propose a semantic classification tool for messages that helps the tutor manage the mass of messages accumulating over time. The tool provides a semantic classification mechanism based on a chosen theme. To make the classification semantically more intelligent and more focused on the chosen theme, the tool incorporates a formal OWL ontology. The reuse and interoperability offered by the ontology remain restricted to the tool's knowledge base. To overcome this limitation, an improvement of the previously proposed SOA architecture is presented in this paper. An implementation of our classifier using the composite-application concept is also explained. Adherence to the XML, SOAP, WSDL and BPEL standards in our implementation guarantees the tool's interoperability with platforms that solicit its classification service, while allowing its reuse with a high degree of granularity.
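The classification mechanism described above can be sketched as matching message terms against concept labels of the chosen theme. In a real deployment those concepts would come from the OWL ontology; here they are hard-coded assumptions.

```python
# Toy theme-to-concept-label table (stand-in for ontology concepts).
THEME_CONCEPTS = {
    "databases": {"sql", "query", "schema", "index"},
    "networking": {"tcp", "socket", "routing", "packet"},
}

def classify(message):
    """Assign a forum message to the theme whose concept labels it
    overlaps most; return None when no concept matches at all."""
    words = set(message.lower().split())
    scores = {theme: len(words & concepts)
              for theme, concepts in THEME_CONCEPTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

label = classify("How do I tune this SQL query with an index scan?")
```

Exposed behind a SOAP/WSDL interface, such a `classify` operation is what other platforms would invoke as the classification service.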
Hobona, G.; Bermudez, L. E.; Brackin, R.
A gazetteer is a geographical directory containing information regarding places. It provides names, locations and other attributes for places, which may include points of interest (e.g. buildings, oilfields and boreholes) and other features. These features can be published via web services conforming to the Gazetteer Application Profile of the Web Feature Service (WFS) standard of the Open Geospatial Consortium (OGC). Against the backdrop of advances in geophysical surveys, there has been a significant increase in the amount of data referenced to locations. Gazetteer services have played a significant role in facilitating access to such data, including through the provision of specialized queries such as text, spatial and fuzzy search. Recent developments in the OGC have led to advances in gazetteers such as support for multilingualism, diacritics, and querying via advanced spatial constraints (e.g. radial search and nearest-neighbor search). A remaining challenge, however, is that gazetteers produced by different organizations have typically been modeled differently. Inconsistencies between gazetteers produced by different organizations may include naming the same feature in different ways, naming the attributes differently, locating the feature in a different location, and providing fewer or more attributes than the other services. The Gazetteer Application Profile of the WFS is a starting point for addressing such inconsistencies by providing a standardized interface based on rules specified in ISO 19112, the international standard for spatial referencing by geographic identifiers. The profile, however, does not provide rules to deal with semantic inconsistencies. The USGS and NGA commissioned research into the potential for a Single Point of Entry Global Gazetteer (SPEGG). The research was conducted by the Cross Community Interoperability thread of the OGC testbed, referenced OWS-9. The testbed prototyped approaches for brokering gazetteers through use of semantic
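A gazetteer lookup through the standardized WFS interface can be sketched as a WFS 2.0 KVP GetFeature request filtered on the geographic identifier. The endpoint, type name and property name below are placeholder assumptions, not a real service.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- not a real WFS gazetteer service.
WFS_ENDPOINT = "https://example.org/wfs"

def gazetteer_query(place_name):
    """Build a WFS 2.0 GetFeature request matching location instances
    by their geographic identifier (ISO 19112-style)."""
    fes_filter = (
        "<fes:Filter xmlns:fes='http://www.opengis.net/fes/2.0'>"
        "<fes:PropertyIsEqualTo>"
        "<fes:ValueReference>geographicIdentifier</fes:ValueReference>"
        f"<fes:Literal>{place_name}</fes:Literal>"
        "</fes:PropertyIsEqualTo></fes:Filter>"
    )
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "iso19112:SI_LocationInstance",
        "filter": fes_filter,
    }
    return WFS_ENDPOINT + "?" + urlencode(params)

url = gazetteer_query("Springfield")
```

Because every conforming service answers the same request shape, a broker can fan this query out to many gazetteers and merge the results.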
P. A. Sundararajan
Full Text Available The authors propose a semantic ontology-driven enterprise data-model architecture for interoperability, integration, and adaptability for evolution, through autonomic agent-driven intelligent design of logical as well as physical data models in a heterogeneous distributed enterprise through its life cycle. An enterprise-standard ontology for data (in Web Ontology Language [OWL] and Semantic Web Rule Language [SWRL]) is required to enable an automated data platform that adds life-cycle activities to the current Microsoft Enterprise Search and extends Microsoft SQL Server through various engines for unstructured data types, as well as many domain types that are configurable by users through a semantic query optimizer, using Microsoft Office SharePoint Server (MOSS) as a content and metadata repository to tie all these components together.
In health care, interoperability--the ability of healthcare information systems to work together and share information within and across organizational boundaries--involves: Data exchange, Infrastructure interoperability, User interface interoperability, Process interoperability.
Segaran, Toby; Taylor, Jamie
With this book, the promise of the Semantic Web - in which machines can find, share, and combine data on the Web - is not just a technical possibility, but a practical reality. Programming the Semantic Web demonstrates several ways to implement semantic web applications, using current and emerging standards and technologies. You'll learn how to incorporate existing data sources into semantically aware applications and publish rich semantic data. Each chapter walks you through a single piece of semantic technology and explains how you can use it to solve real problems. Whether you're writing
Full Text Available It has been proposed that Semantic Web technologies would be key enablers in achieving context-aware computing in our everyday environments. In our vision of semantic technology empowered smart spaces, the whole interaction model is based on the sharing of semantic data via common blackboards. This approach allows smart space applications to take full advantage of semantic technologies. Because of its novelty, however, there is a lack of solutions and methods for developing semantic smart space applications according to this vision. In this paper, we present solutions to the most relevant challenges we have faced when developing context-aware computing in smart spaces. In particular, the paper describes (1) methods for utilizing semantic technologies with resource-restricted devices, (2) a solution for identifying real-world objects in semantic technology empowered smart spaces, (3) a method for users to modify the behavior of context-aware smart space applications, and (4) an approach for content sharing between autonomous smart space agents. The proposed solutions include ontologies, system models, and guidelines for building smart spaces with the M3 semantic information sharing platform. To validate and demonstrate the approaches in practice, we have implemented various prototype smart space applications and tools.
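The blackboard interaction model described above can be sketched as a shared triple store with pattern subscriptions. This is a toy stand-in for an M3-style information broker, not its actual API.

```python
class Blackboard:
    """Shared store of (subject, predicate, object) triples; agents
    insert facts and subscribe to wildcard patterns (None = any)."""

    def __init__(self):
        self.triples = set()
        self.subscriptions = []  # (pattern, callback) pairs

    def _matches(self, pattern, triple):
        return all(p is None or p == t for p, t in zip(pattern, triple))

    def subscribe(self, pattern, callback):
        self.subscriptions.append((pattern, callback))

    def insert(self, triple):
        self.triples.add(triple)
        for pattern, callback in self.subscriptions:
            if self._matches(pattern, triple):
                callback(triple)  # notify interested agents

    def query(self, pattern):
        return [t for t in self.triples if self._matches(pattern, t)]

bb = Blackboard()
seen = []
bb.subscribe(("room1", "temperature", None), seen.append)
bb.insert(("room1", "temperature", "21C"))   # triggers the subscriber
bb.insert(("room2", "humidity", "40%"))      # does not match
```

The key property is that agents never call each other directly; all coordination happens through the shared semantic store.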
Berndt, Sarah; Doane, Mike
An evolution of the Semantic Web, the Social Semantic Web (s2w), facilitates knowledge sharing with "useful information based on human contributions, which gets better as more people participate." The s2w reaches beyond the search box to move us from a collection of hyperlinked facts, to meaningful, real time context. When focused through the lens of Enterprise Search, the Social Semantic Web facilitates the fluid transition of meaningful business information from the source to the user. It is the confluence of human thought and computer processing structured with the iterative application of taxonomies, folksonomies, ontologies, and metadata schemas. The importance and nuances of human interaction are often deemphasized when focusing on automatic generation of semantic markup, which results in dissatisfied users and unrealized return on investment. Users consistently qualify the value of information sets through the act of selection, making them the de facto stakeholders of the Social Semantic Web. Employers are the ultimate beneficiaries of s2w utilization with a better informed, more decisive workforce; one not achieved with an IT miracle technology, but by improved human-computer interactions. Johnson Space Center Taxonomist Sarah Berndt and Mike Doane, principal owner of Term Management, LLC discuss the planning, development, and maintenance stages for components of a semantic system while emphasizing the necessity of a Social Semantic Web for the Enterprise. Identification of risks and variables associated with layering the successful implementation of a semantic system are also modeled.
Dombeu, Jean Vincent Fonou; 10.5121/ijwest.2011.2202
One of the key challenges in electronic government (e-government) is the development of systems that can be easily integrated and interoperated to provide seamless service delivery to citizens. In recent years, Semantic Web technologies based on ontologies have emerged as promising solutions to these engineering problems. However, current research on semantic development in e-government does not focus on the application of available methodologies and platforms for developing government domain ontologies. Furthermore, only a few of these works provide detailed guidelines for developing semantic ontology models from a government service domain. This research presents a case study combining an ontology building methodology and two state-of-the-art Semantic Web platforms, namely Protege and the Java Jena ontology API, for semantic ontology development in e-government. Firstly, a framework adapted from the Uschold and King ontology building methodology is employed to build a domain ontology describing th...
Headayetullah, Md; 10.5121/ijcsit.2010.2306
Improved interoperability between public and private organizations is of key significance to making digital government successful. Digital government interoperability, information sharing protocols and security are considered the key issues for achieving an advanced stage of digital government. Seamless interoperability is essential to share information between diverse and widely dispersed organisations in several network environments using computer-based tools. Digital government must ensure security for its information systems, including computers and networks, in order to provide better service to citizens. Governments around the world are increasingly turning to information sharing and integration to solve problems in programs and policy areas. Problems of global concern such as disease detection and control, terrorism, immigration and border control, illegal drug trafficking, and more demand information sharing, harmonization and cooperation among government agencies within a country and acros...
Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.
Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.
Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos
Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to the original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely the Case Series Characterization Tool, which has been deployed on top of the regional EHR data warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack this background information. PMID:27123451
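The mediation idea above - each side keeps its own terminology while declarative rules lift local codes to a common model - can be sketched as a rule table. The code systems, codes and concept names below are invented for illustration; SALUS itself uses formal ontology representations and a reasoner.

```python
# Declarative mapping rules: (source_system, source_code) -> common concept.
MAPPING_RULES = [
    (("ICD9-LOCAL", "410"), "CIM:MyocardialInfarction"),
    (("LOCAL-LAB", "GLU"), "CIM:GlucoseMeasurement"),
]

def to_common_model(system, code):
    """Lift a local EHR code into the common information model, or
    return None so unmapped codes can be flagged for curator review."""
    for (src_system, src_code), concept in MAPPING_RULES:
        if (src_system, src_code) == (system, code):
            return concept
    return None

concept = to_common_model("ICD9-LOCAL", "410")
```

Because the rules are data rather than code, adding a new EHR source means adding rules, not rewriting either system.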
Mazzetti, Paolo; D'Auria, Luca; Reitano, Danilo; Papeschi, Fabrizio; Roncella, Roberto; Puglisi, Giuseppe; Nativi, Stefano
In accordance with the international Supersite initiative concept, the MED-SUV (MEDiterranean SUpersite Volcanoes) European project (http://med-suv.eu/) aims to enable long-term monitoring experiments in two relevant geologically active regions of Europe prone to natural hazards: Mt. Vesuvio/Campi Flegrei and Mt. Etna. This objective requires the integration of existing components, such as monitoring systems and databases, and of novel sensors for the measurement of volcanic parameters. Moreover, MED-SUV is also a direct contribution to the Global Earth Observation System of Systems (GEOSS), as one of the volcano Supersites recognized by the Group on Earth Observations (GEO). To achieve its goal, MED-SUV set up an advanced e-infrastructure allowing the discovery of and access to heterogeneous data for multidisciplinary applications, and integration with external systems like GEOSS. The MED-SUV overall infrastructure is conceived as a three-layer architecture, with the lower layer (Data level) including the identified relevant data sources, the mid-tier (Supersite level) including components for mediation and harmonization, and the upper tier (Global level) composed of the systems that MED-SUV must serve, such as GEOSS and possibly other global/community systems. The Data level is mostly composed of existing data sources, such as space agencies' satellite data archives, the UNAVCO system, and the INGV-Rome data service. They share data according to different specifications for metadata, data and service interfaces, and cannot be changed. Thus, the only relevant MED-SUV activity at this level was the creation of a MED-SUV local repository based on Web Accessible Folder (WAF) technology, deployed at the INGV site in Catania and hosting in-situ data and products collected and generated during the project. The Supersite level is at the core of the MED-SUV architecture, since it must mediate between the disparate data sources in the layer below, and provide a harmonized view to
Vasilescu, E; Dorobanţu, M; Govoni, S; Padh, S; Mun, S K
An electronic health record depends on the consistent handling of people's identities within and outside healthcare organizations. Currently, the Person Identification Service (PIDS), a CORBA specification, is the only well-researched standard that meets these needs. In this paper, we introduce WS/PIDS, a PIDS specification for Web Services (WS) that closely matches the original PIDS and improves on it by providing explicit support for medical multimedia attributes. WS/PIDS is currently supported by a test implementation, layered on top of a PIDS back-end, with Java-, .NET-, and Web-based clients. WS/PIDS is interoperable among platforms; it preserves PIDS semantics to a large extent, and it is intended to be fully compliant with established and emerging WS standards. The specification is open source and immediately usable in dynamic clinical systems participating in grid environments. WS/PIDS has been tested successfully with a comprehensive set of use cases, and it is being used in a clinical research setting.
Alowisheq, Areeb; Millard, David E.; Tiropanis, Thanassis
Existing approaches to Semantic Web Services (SWS) require a domain ontology and a semantic description of the service. In the case of lightweight SWS approaches, such as SAWSDL, service description is achieved by semantically annotating existing web service interfaces. Other approaches, such as OWL-S and WSMO, describe services in a separate ontology. Existing approaches thus separate service description from domain description, increasing design effort. We propose EXPRESS, a lightweight approach to SWS that requires only the domain ontology definition. Its simplicity stems from the similarities between REST and the Semantic Web, such as resource realization, self-describing representations, and uniform interfaces. The semantics of a service is elicited from a resource's semantic description in the domain ontology and the semantics of the uniform interface, hence eliminating the need to describe services ontologically. We provide an example that illustrates EXPRESS and then discuss how it compares to SA-REST and WSMO.
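The central idea above - deriving a service's meaning from the uniform interface plus the resource's class in the domain ontology, with no separate service ontology - can be sketched as follows. The ontology mapping and verb-to-semantics table are illustrative assumptions, not the EXPRESS formalism itself.

```python
# Toy domain ontology: resource path -> ontology class.
DOMAIN_ONTOLOGY = {"/books/1": "ex:Book", "/authors/7": "ex:Author"}

# Fixed semantics of the uniform interface (assumed phrasing).
VERB_SEMANTICS = {
    "GET": "retrieve an instance of",
    "PUT": "update an instance of",
    "DELETE": "remove an instance of",
}

def describe(verb, path):
    """Derive the meaning of a RESTful call by combining the uniform
    interface verb with the resource's domain-ontology class."""
    cls = DOMAIN_ONTOLOGY.get(path)
    if cls is None or verb not in VERB_SEMANTICS:
        return None
    return f"{VERB_SEMANTICS[verb]} {cls}"

desc = describe("GET", "/books/1")
```

Nothing service-specific had to be authored: once a resource is in the domain ontology, every uniform-interface operation on it is already described.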
Smirnov, Alexander V.
In the modern social and economic environment of Russia, gratitude might be considered an ambiguous phenomenon. It can have different meanings for a person in different contexts and can manifest itself differently as well (that is, as an expression of sincere feelings or as an element of corruption). In this respect it is topical to investigate the system of meanings and relationships that define the semantic space of gratitude. The goal of the study was the investigation and description of the content and structure of the semantic space of the gratitude phenomenon, as well as the determination of male, female, age, and ethnic peculiarities of the expression of gratitude. The objective was achieved by using the semantic differential designed by the authors to investigate attitudes toward gratitude. This investigation was carried out with the participation of 184 respondents (Russians, Tatars, Ukrainians, Jews) living in the Russian Federation, Belarus, Kazakhstan, Tajikistan, Israel, Australia, Canada, and the United Kingdom and identifying themselves as representatives of one of these nationalities. The structural components of gratitude were singled out by means of exploratory factor analysis of the empirical data from the designed semantic differential. Gender, age, and ethnic differences were differentiated by means of Student's t-test. Gratitude can be represented by material and nonmaterial forms as well as by actions in response to help given. The empirical data allowed us to design the ethnically nonspecified semantic structure of gratitude. During the elaboration of the differential, semantic universals of gratitude, which constitute its psychosemantic content, were distinguished. Peculiarities of attitudes toward gratitude by those in different age and gender groups were revealed. Differences in the degree of manifestation of components of the psychosemantic structure of gratitude related to ethnic characteristics were not discovered.
sections provide best practice information (articles, papers, websites, etc.). 5.1.1 OCU Physical Attributes ... Standards Number Document... information (articles, papers, websites, etc.). In future versions of this document, additional tables will be developed, containing standards and guidelines... Mounted Display; ICD Interface Control Document; IEC International Electrotechnical Commission; IMS Intelligent Munitions Systems; IOP Interoperability
Demchenko, Y.; Makkes, M.X.; Strijkers, R.J.; Laat, C. de
This paper presents on-going research to develop the Intercloud Architecture Framework (ICAF) that addresses problems in multi-provider multi-domain heterogeneous cloud based infrastructure services and applications integration and interoperability. The paper refers to existing standards in Cloud Co
Asim, M.; Petkovic, M.; Qu, M.; Wang, C.
A connected and interoperable healthcare system promises to reduce the cost of healthcare delivery, increase its efficiency, and enable consumers to better engage with clinicians and manage their care. However, at the same time it introduces new risks to the security and privacy of personal health inf
Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; Longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin
The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.
Savio, E.; Carmignato, S.; De Chiffre, Leonardo
... these inefficiencies. The paper presents a methodology for an economic evaluation of interoperability benefits with respect to the verification of geometrical product specifications. It requires input data from testing and inspection activities, as well as information on training of personnel and licensing of software...
Wyborn, L. A.; Evans, B. J. K.; Trenham, C.; Druken, K. A.; Wang, J.
The National Computational Infrastructure (NCI) at the Australian National University (ANU) has collocated over 10 PB of national and international data assets within a HPC facility to create the National Environmental Research Data Interoperability Platform (NERDIP). The data span a wide range of fields, from the earth systems and environment (climate, coasts, oceans, and geophysics) through to astronomy, bioinformatics, and the social sciences. These diverse data collections are collocated on a major data storage node that is linked to a petascale HPC and cloud facility. Users can search across all of the collections and either log in and access the data directly, or access the data via standards-based web services. These collocated petascale data collections are theoretically a massive resource for interdisciplinary science at scales and resolutions never hitherto possible. But once collocated, multiple barriers became apparent that make cross-domain data integration very difficult and often so time consuming that either less ambitious research goals are attempted or the project is abandoned. Incompatible content is only one half of the problem: other showstoppers are differing access models, licences, and issues of ownership of derived products. Brokers can enable interdisciplinary research, but in reality are we just delaying the inevitable? A call to action is required: adopt a transdisciplinary approach at the conception of new multi-disciplinary systems, whereby those across all the scientific domains, the humanities, the social sciences, and beyond work together to create a unity of informatics platforms that interoperate horizontally across the multiple discipline boundaries, and also operate vertically to enable a diversity of people to access data, from high-end researchers to undergraduates, school students, and the general public. Once we master such a transdisciplinary approach to our vast global information assets, we will then achieve
Kuo, K. S.; Ramachandran, R.
The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file formats by NASA's Earth Observing System Data Information System (EOSDIS) doubtlessly propelled the interoperability of NASA Earth science data to unprecedented heights in the 1990s. However, we obviously still feel wanting two decades later. We believe the inadequate interoperability we experience is a result of the current practice that data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and become searchable. Data therefore cannot be efficiently filtered. Any extensive study thus requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its/his/her own preference in the choice of data management practice as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences of these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to more efficiently analyze large volumes and great varieties of data. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis services right where the data reside. "Analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center
This article presents a specific issue of the semantic analysis of texts in natural language, namely text indexing, and describes one field of its application (web browsing). The main part of this article describes a computer system that assigns a set of semantic indexes (similar to keywords) to a particular text. The indexing algorithm employs a semantic dictionary to find specific words in a text that represent its content. Furthermore, it compares two given sets of semantic indexes to determine the texts' similarity (assigning a numerical value). The article describes the semantic dictionary, a tool essential to accomplish this task, and its usefulness, the main concepts of the algorithm, and test results.
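The two steps the abstract describes, dictionary-based index assignment and numerical comparison of index sets, can be sketched roughly as follows; the tiny dictionary and the Jaccard formula are assumptions for illustration, not the system's actual resources.

```python
# Illustrative sketch: (1) assign semantic indexes to a text via a semantic
# dictionary, and (2) compare two index sets with a numerical similarity.
# The dictionary entries and the similarity formula are assumptions.

SEMANTIC_DICTIONARY = {
    "car": "vehicle", "truck": "vehicle", "bus": "vehicle",
    "apple": "fruit", "pear": "fruit",
}

def index_text(text: str) -> set:
    """Map dictionary words found in the text to their semantic indexes."""
    words = text.lower().split()
    return {SEMANTIC_DICTIONARY[w] for w in words if w in SEMANTIC_DICTIONARY}

def similarity(a: set, b: set) -> float:
    """Jaccard coefficient: |A & B| / |A | B| (0.0 for two empty sets)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

i1 = index_text("the car passed a truck")
i2 = index_text("an apple and a pear in the bus")
print(similarity(i1, i2))  # {vehicle} vs {fruit, vehicle} -> 0.5
```

Note that the comparison operates on semantic indexes rather than surface words, so texts with no words in common can still score as similar.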
Miller, G A; Fellbaum, C
Principles of lexical semantics developed in the course of building an on-line lexical database are discussed. The approach is relational rather than componential. The fundamental semantic relation is synonymy, which is required in order to define the lexicalized concepts that words can be used to express. Other semantic relations between these concepts are then described. No single set of semantic relations or organizational structure is adequate for the entire lexicon: nouns, adjectives, and verbs each have their own semantic relations and their own organization determined by the role they must play in the construction of linguistic messages.
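The relational approach described above, with synonym sets as lexicalized concepts and further relations (such as hypernymy for nouns) holding between concepts rather than word forms, might be modeled minimally like this; the synsets and relation below are illustrative, not actual WordNet data.

```python
# Toy sketch of a relational lexicon: a concept is a set of synonymous word
# forms (a synset), and semantic relations such as hypernymy link concepts.
# All entries are illustrative.

from dataclasses import dataclass, field

@dataclass
class Synset:
    words: frozenset                 # synonymous word forms for one concept
    pos: str                         # nouns, verbs, adjectives are organized
                                     # by different relations in the lexicon
    hypernyms: list = field(default_factory=list)

entity = Synset(frozenset({"entity"}), "noun")
animal = Synset(frozenset({"animal", "beast"}), "noun", [entity])
dog = Synset(frozenset({"dog", "domestic_dog"}), "noun", [animal])

def is_a(s: Synset, ancestor: Synset) -> bool:
    """Transitive hypernymy: does `s` denote a kind of `ancestor`?"""
    return s is ancestor or any(is_a(h, ancestor) for h in s.hypernyms)

print(is_a(dog, entity))  # dog -> animal -> entity, so True
```

Synonymy defines the nodes and hypernymy one of the edges; other parts of speech would carry their own relations (e.g. antonymy for adjectives, troponymy for verbs).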
Emergency management becomes more challenging in international crisis episodes because of cultural, semantic, and linguistic differences between all stakeholders, especially first responders. Misunderstandings between first responders make decision-making slower and more difficult. However, the spread and development of networks and IT-based Emergency Management Systems (EMS) has improved emergency responses, which have become more coordinated. Despite improvements made in recent years, EMS have still not solved the problems related to cultural, semantic, and linguistic differences, which are the real cause of slower decision-making. In addition, from a technical perspective, the consolidation of current EMS and the different formats used to exchange information pose another problem to be solved in any solution proposed for information interoperability between heterogeneous EMS surrounded by different contexts. To overcome these problems we present a software solution based on semantic and mediation technologies. EMERGency ELements (EMERGEL) (Fundacion CTIC and AntwortING Ingenieurbüro PartG 2013), a common and modular ontology shared by all the stakeholders, has been defined. It offers the best solution to gather all stakeholders' knowledge in a unique and flexible data model, taking into account different countries' cultural and linguistic issues. To deal with the diversity of data protocols and formats, we have designed a Service Oriented Architecture for Data Interoperability (named DISASTER), providing a flexible, extensible solution to the mediation issues. Web Services have been adopted as the specific technology to implement this paradigm, as they have the most significant academic and industrial visibility and attraction. Contributions of this work have been validated through the design and development of a realistic cross-border prototype scenario, actively involving both emergency managers and emergency first responders: the Netherlands-Germany border fire.
Web services are a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of the web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI), and communicate by the Simple Object Access Protocol (SOAP). Such a vision has never been fully accomplished. Although WSDL was criticized for providing only a syntactic, not semantic, definition of web services, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions of web services, which serve different purposes in service computation. Further, this paper proposes that the semantics of a web service are neutral and independent of the service interface definition, data types, and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in the GIScience application are examined in this paper, while the formalization of geospatial services needs to be constructed by the GIScience community toward a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.
McGuinness, Deborah; Fox, Peter; Hendler, James
The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query, and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing, and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?". http://tw.rpi.edu/portal/SESF
The availability of geographic and geospatial information and services, especially on the open Web, has become abundant in the last several years with the proliferation of online maps, geocoding services, geospatial Web services, and geospatially enabled applications. The need for geospatial reasoning has significantly increased in many everyday applications, including personal digital assistants, Web search applications, location-aware mobile services, and specialized systems for emergency response, medical triaging, and intelligence analysis. Geospatial Semantics and the Semantic Web: Foundation
Spalazzese, Romina (doi:10.4204/EPTCS.37.3)
A key objective for ubiquitous environments is to enable interoperability between system components that are highly heterogeneous. In particular, the challenge is to embed in the system architecture the necessary support to cope with behavioral diversity, in order to allow components to coordinate and communicate. The continuously evolving environment further calls for an automated and on-the-fly approach. In this paper we present the design building blocks for dynamic and on-the-fly interoperability between heterogeneous components. Specifically, we describe an architectural pattern called the Mediating Connector, which is the key enabler for communication. In addition, we present a set of Basic Mediator Patterns that describe the basic mismatches which can occur when components try to interact, and their corresponding solutions.
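The Mediating Connector idea, a connector that resolves behavioral mismatches so heterogeneous components can still communicate, can be illustrated with the simplest basic mismatch: a one-to-one difference in message vocabulary. The class names and mapping below are hypothetical, not taken from the paper.

```python
# Sketch of a Mediating Connector resolving a one-to-one message mismatch
# between two components that mean the same thing but use different
# vocabularies. All names and the mapping are illustrative.

class FrenchClient:
    def request(self):
        return "BONJOUR"            # speaks one protocol vocabulary

class EnglishServer:
    def handle(self, message):
        if message != "HELLO":      # expects a different vocabulary
            raise ValueError("protocol mismatch")
        return "OK"

class MediatingConnector:
    """Basic mediator pattern: translate messages on the fly."""
    MAPPING = {"BONJOUR": "HELLO"}

    def __init__(self, server):
        self.server = server

    def forward(self, message):
        return self.server.handle(self.MAPPING.get(message, message))

mediator = MediatingConnector(EnglishServer())
print(mediator.forward(FrenchClient().request()))  # prints "OK"
```

In the paper's setting such mappings would be derived automatically at run time; here the table is fixed purely to show where the mediation sits in the architecture.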
alongside and in conjunction with existing information management systems (IMSs) (as illustrated in Figure 2) to manage the production, delivery, and... of interfaces for interoperation between the QMS and various IMSs and publication/subscription (pub/sub) services. The interfaces support (1)... IMSs. 2.1 QMS Core Components QMS Core consists of core modules and a connectivity monitor. The core modules are illustrated in Figure 7 and
The Federal Aviation Administration (FAA) is pioneering a transformation of the national airspace system from its present ground based navigation and landing systems to a satellite based system using the Global Positioning System (GPS). To meet the critical safety-of-life aviation positioning requirements, a Satellite-Based Augmentation System (SBAS), the Wide Area Augmentation System (WAAS), is being implemented to support navigation for all phases of flight, including Category I precision approach. The system is designed to be used as a primary means of navigation, capable of meeting the Required Navigation Performance (RNP), and therefore must satisfy the accuracy, integrity, continuity and availability requirements. In recent years there has been international acceptance of Global Navigation Satellite Systems (GNSS), spurring widespread growth in the independent development of SBASs. Besides the FAA's WAAS, the European Geostationary Navigation Overlay Service System (EGNOS) and the Japan Civil Aviation Bureau's MTSAT-Satellite Augmentation System (MSAS) are also being actively developed. Although all of these SBASs can operate as stand-alone, regional systems, there is increasing interest in linking these SBASs together to reduce costs while improving service coverage. This research investigated the coverage and availability improvements due to cooperative efforts among regional SBAS networks. The primary goal was to identify the optimal interoperation strategies in terms of performance, complexity and practicality. The core algorithms associated with the most promising concepts were developed and demonstrated. Experimental verification of the most promising concepts was conducted using data collected from a joint international test between the National Satellite Test Bed (NSTB) and the EGNOS System Test Bed (ESTB). This research clearly shows that a simple switch between SBASs made by the airborne equipment is the most effective choice for achieving the
The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with
Palmer, Martha; Xue, Nianwen
This book is aimed at providing an overview of several aspects of semantic role labeling. Chapter 1 begins with linguistic background on the definition of semantic roles and the controversies surrounding them. Chapter 2 describes how the theories have led to structured lexicons such as FrameNet, VerbNet and the PropBank Frame Files that in turn provide the basis for large scale semantic annotation of corpora. This data has facilitated the development of automatic semantic role labeling systems based on supervised machine learning techniques. Chapter 3 presents the general principles of applyin
Pollock, Jeffrey T
Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t
Background: With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners and that provides test data and test plans. Results: In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties.
Konstadinidou, Aggeliki; Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios
This paper presents the Semantic Alignment Tool, a unified, classified, ontological framework, for the description of assistive solutions that comprises information from different sources automatically. The Semantic Alignment Tool is a component of the Cloud4All/GPII infrastructure that enables users to add and/or modify descriptions of assistive technologies and align their specific settings with similar settings in an ontological model based on ISO 9999. The current work presents the interaction of the Semantic Alignment Tool with external sources that contain descriptions and metadata for Assistive Technologies (ATs) in order to achieve their synchronization in the same semantic model.
Governmental data are being published in many countries, providing an unprecedented opportunity to create innovative services and to increase societal awareness about administration dynamics. In particular, semantic technologies for linked data production and exploitation prove to be ideal for managing the identity and interoperability of administrative entities and data. This paper presents the current state of the art and evolution scenarios of these technologies, with reference to several case studies, including two from the Italian context: CNR's Semantic Scout and DigitPA's Linked Open IPA.
As geographic information interoperability and sharing develop, more and more interoperable OGC (Open Geospatial Consortium) Web services (OWS) are generated and published through the internet. These services can facilitate the integration of different scientific applications by searching, finding, and utilizing the large number of scientific data sets and Web services. However, these services are widely dispersed and hard to find and utilize through effective semantic retrieval. This is especially true given the weak semantic description of geographic information service data. Focusing on semantic retrieval and reasoning over distributed OWS resources, a deductive and semantic reasoning method is proposed to describe and search relevant OWS resources. Specifically, (1) description words are extracted from OWS metadata files to generate a GISe ontology database and instance database based on geographic ontology, according to the basic geographic element categories; (2) a description-word reduction model is put forward to implement knowledge reduction on the GISe instance database based on rough set theory and to generate an optimized instance database; (3) the GISe ontology database and the optimized instance database are used to implement semantic inference and reasoning over geographic search objects, as an example to demonstrate the efficiency, feasibility, and recall ratio of the proposed description-word-based reduction model.
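The reduction step can be sketched in the rough-set spirit as dropping any description word whose removal still leaves instances of different categories discernible. The toy service instances, categories, and the greedy reduct search below are assumptions for illustration, not the paper's algorithm or data.

```python
# Illustrative rough-set-style reduction: a description word (attribute) is
# dispensable if, without it, instances in different categories can still be
# told apart. Instances and categories are hypothetical.

INSTANCES = {
    "ows1": ({"map", "raster", "wms"}, "visualization"),
    "ows2": ({"feature", "vector", "wfs"}, "data-access"),
    "ows3": ({"map", "tile", "wms"}, "visualization"),
}

def discernible(attrs: set) -> bool:
    """Different-category instances must differ on the kept attributes."""
    sig = {n: (frozenset(w & attrs), c) for n, (w, c) in INSTANCES.items()}
    for a in sig.values():
        for b in sig.values():
            if a[0] == b[0] and a[1] != b[1]:
                return False
    return True

def reduce_attributes() -> set:
    """Greedily discard each word that is dispensable."""
    kept = set().union(*(w for w, _ in INSTANCES.values()))
    for word in sorted(kept):
        if discernible(kept - {word}):
            kept.discard(word)
    return kept

print(reduce_attributes())  # -> {'wms'}: one word suffices here
```

On this toy data a single word already separates the categories; real OWS metadata would of course yield a much larger reduct.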
Daclin, Nicolas; Chen, David; Vallespir, Bruno
The aim of this article is to present a methodology for guiding enterprises to implement and improve interoperability. This methodology is based on three components: a framework of interoperability which structures specific solutions of interoperability and is composed of abstraction levels, views and approaches dimensions; a method to measure interoperability including interoperability before (maturity) and during (operational performances) a partnership; and a structured approach defining the steps of the methodology, from the expression of an enterprise's needs to implementation of solutions. The relationship which consistently relates these components forms the methodology and enables developing interoperability in a step-by-step manner. Each component of the methodology and the way it operates is presented. The overall approach is illustrated in a case study example on the basis of a process between a given company and its dealers. Conclusions and future perspectives are given at the end of the article.
In this work, we provide two case studies on interoperability and transfer of knowledge in the environment of a company dealing with plant protection. We find that the area of plant protection is highly oriented toward working with knowledge. In this case, interoperability of knowledge can play an important role in acquiring knowledge from different environments to solve specific problems in companies dealing with plant protection. Nevertheless, the concept of interoperability is well developed on the level of data and information only. We build on our previous works, where we defined a logical concept for the interoperability of knowledge on the level of knowledge units. The objective of this work is to show how to apply our process model of knowledge interoperability in a particular plant protection company. Two case studies are provided in order to demonstrate the distinction between simple knowledge transfer and knowledge interoperability.
Cömert, Çetin; Ulutaş, Deniztan; Akıncı, Halil; Kara, Gülten
The aim of this work was to get acquainted with semantic web services (SWS) and assess their potential for the implementation of the technical interoperability infrastructure of Spatial Data Infrastructures (SDIs). SDIs are a widely accepted way of enabling collaboration among various parties, allowing the sharing of each other's "data" and "services". Collaboration is indispensable given either the business model or other requirements, such as that of the "Sustainable Development" of the day. SDIs can be ...
Craft, Richard Layne, II
In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.
Sbodio, Marco Luca; Moulin, Claude; Benamou, Norbert; Barthès, Jean-Paul
This chapter describes the major aspects of an e-government platform in which semantics underpins more traditional technologies in order to enable new capabilities and to overcome technical and cultural challenges. The design and development of such an e-government Semantic Platform has been conducted with the financial support of the European Commission through the Terregov research project: "Impact of e-government on Territorial Government Services" (Terregov 2008). The goal of this platform is to let local government and government agencies offer online access to their services in an interoperable way, and to allow them to participate in orchestrated processes involving services provided by multiple agencies. Implementing a business process through an electronic procedure is indeed a core goal in any networked organization. However, the field of e-government brings specific constraints to the operations allowed in procedures, especially concerning the flow of private citizens' data: for legal reasons in most countries, such data are allowed to circulate only from agency to agency directly. In order to promote transparency and responsibility in e-government while respecting the specific constraints on data flows, Terregov supports the creation of centrally controlled orchestrated processes; while the cross-agency data flows are centrally managed, data flow directly across agencies.
Piana, Fabrizio; Lombardo, Vincenzo; Mimmo, Dario; Giardino, Marco; Fubelli, Giandomenico
In modern digital geological maps, namely those supported by a large geo-database and devoted to dynamical, interactive representation on WMS-WebGIS services, there is the need to provide, in an explicit form, the geological assumptions used for the design and compilation of the database of the map, and to define and/or adopt semantic representations and taxonomies, in order to achieve a formal and interoperable representation of the geologic knowledge. These approaches are fundamental for the integration and harmonisation of geological information and services across cultural (e.g. different scientific disciplines) and/or physical barriers (e.g. administrative boundaries). Initiatives such as the GeoScience Markup Language (latest version GeoSciML 4.0, 2015, http://www.geosciml.org) and the INSPIRE "Data Specification on Geology" http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf (an operative simplification of GeoSciML, latest version 3.0 rc3, 2013), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG), have been promoting information exchange of geologic knowledge. Grounded on these standard vocabularies, schemas, and data models, we provide a shared semantic classification of geological data referring to the study case of the synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap", developed by the CNR Institute of Geosciences and Earth Resources, Torino (CNR IGG TO) and hosted as a dynamical interactive map on the geoportal of the ARPA Piemonte Environmental Agency. The Piemonte Geological Map is grounded on a regional-scale geo-database consisting of some hundreds of GeologicUnits, whose thousands of instances (Mapped Features, polygon geometry) widely occur in the Piemonte region, each one bounded by GeologicStructures (Mapped Features, line geometry). GeologicUnits and GeologicStructures have been spatially
This tech talk describes how to write and how to inter-derive formal semantics for sequential programming languages. The progress reported here is (1) concrete guidelines to write each formal semantics to alleviate their proof obligations, and (2) simple calculational tools to obtain a formal...
Thayer, Lee, Ed.
This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybski-ian principle that man's view of reality is distorted by…
Grasten, Maj Lervad
Book review of: Semantics of Statebuilding: Language, Meanings & Sovereignty / (eds) Nicolas Lemay-Hébert, Nicholas Onuf, Vojin Rakić, Petar Bojanić. Abingdon: Routledge, 2014. 200 pp.
Sicilia, Miguel-Angel; Lytras, Miltiadis D.
Purpose: The aim of this paper is to introduce the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…
Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.
Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or in the more complex case of start-end temporal intervals. But also of note are the more general partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
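The idea of a scalar semantic similarity over partially ordered (is-a) data can be sketched with a Wu-Palmer-style score on a tiny taxonomy. The taxonomy, function names, and formula choice below are illustrative assumptions, not the metrics developed in the paper.

```python
# Toy is-a hierarchy (a partial order); each term maps to its parent.
PARENTS = {
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
    "animal": "thing",
}

def ancestors(term):
    """Chain from `term` up to the root, inclusive."""
    chain = [term]
    while term in PARENTS:
        term = PARENTS[term]
        chain.append(term)
    return chain

def depth(term):
    """Distance from the root (root has depth 1)."""
    return len(ancestors(term))

def wu_palmer(a, b):
    """Wu-Palmer-style score: 2*depth(lca) / (depth(a) + depth(b))."""
    common = set(ancestors(b))
    lca = next(t for t in ancestors(a) if t in common)
    return 2 * depth(lca) / (depth(a) + depth(b))
```

Scores increase with a deeper least common ancestor, so `wu_palmer("dog", "cat")` exceeds `wu_palmer("dog", "bird")`; such scalar scores support rank ordering but, as the abstract notes, are not themselves true metrics on the poset.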
de Lencastre Hermínia
Full Text Available Abstract Background The value and usefulness of data increases when it is explicitly interlinked with related data. This is the core principle of Linked Data. For life sciences researchers, harnessing the power of Linked Data to improve biological discovery is still challenged by the need to keep pace with rapidly evolving domains and with requirements for collaboration and control, as well as with the reference semantic web ontologies and standards. Knowledge organization systems (KOSs) can provide an abstraction for publishing biological discoveries as Linked Data without complicating transactions with contextual minutiae such as provenance and access control. We have previously described the Simple Sloppy Semantic Database (S3DB) as an efficient model for creating knowledge organization systems using Linked Data best practices, with explicit distinction between domain and instantiation and support for a permission control mechanism that automatically migrates between the two. In this report we present a domain-specific language, the S3DB query language (S3QL), to operate on its underlying core model and facilitate management of Linked Data. Results Reflecting the data-driven nature of our approach, S3QL has been implemented as an application programming interface for S3DB systems hosting biomedical data, and its syntax was subsequently generalized beyond the S3DB core model. This achievement is illustrated with the assembly of an S3QL query to manage entities from the Simple Knowledge Organization System. The illustrative use cases include gastrointestinal clinical trials, genomic characterization of cancer by The Cancer Genome Atlas (TCGA), and molecular epidemiology of infectious diseases. Conclusions S3QL was found to provide a convenient mechanism to represent context for interoperation between public and private datasets hosted at biomedical research institutions and linked data formalisms.
Pan Jiayi; Chin-Pang Jack Cheng; Gloria T. Lau; Kincho H. Law
The objective of this paper is to introduce three semi-automated approaches for ontology mapping using relatedness analysis techniques. In the architecture, engineering, and construction (AEC) industry, there exist a number of ontological standards to describe the semantics of building models. Although the standards share similar scopes of interest, the task of comparing and mapping concepts among standards is challenging due to their differences in terminologies and perspectives. Ontology mapping is therefore necessary to achieve information interoperability, which allows two or more information sources to exchange data and to re-use the data for further purposes. The attribute-based approach, corpus-based approach, and name-based approach presented in this paper adopt the statistical relatedness analysis techniques to discover related concepts from heterogeneous ontologies. A pilot study is conducted on IFC and CIS/2 ontologies to evaluate the approaches. Preliminary results show that the attribute-based approach outperforms the other two approaches in terms of precision and F-measure.
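A name-based relatedness pass of the kind described above can be sketched as a string-similarity match between concept names from two schemas. The concept names, threshold, and use of `difflib` are illustrative assumptions, not the actual IFC or CIS/2 vocabularies or the paper's scoring functions.

```python
from difflib import SequenceMatcher

# Invented stand-ins for concept names from two building-model schemas.
SOURCE_CONCEPTS = ["BeamMember", "ColumnMember", "SlabElement"]
TARGET_CONCEPTS = ["beam", "column", "wall_panel"]

def relatedness(a, b):
    """Normalized string similarity as a crude name-based relatedness score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_concepts(source, target, threshold=0.5):
    """Map each source concept to its best-scoring target above threshold."""
    mapping = {}
    for s in source:
        best = max(target, key=lambda t: relatedness(s, t))
        if relatedness(s, best) >= threshold:
            mapping[s] = best
    return mapping
```

Attribute-based and corpus-based variants would replace `relatedness` with scores over shared attributes or term co-occurrence statistics, while the mapping loop stays the same.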
Marshall M Scott
Full Text Available Abstract Background A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need
El Fadly, A; Daniel, C; Bousquet, C; Dart, T; Lastic, P-Y; Degoulet, P
Integrating clinical research data entry with patient care data entry is a challenging issue. At the G. Pompidou European Hospital (HEGP), cardiovascular radiology reports are captured twice, first in the Electronic Health Record (EHR) and then in a national clinical research server. Informatics standards are different for the EHR (HL7 CDA) and for clinical research (CDISC ODM). The objective of this work is to feed both the EHR and a Clinical Research Data Management System (CDMS) from a single multipurpose form. We adopted and compared two approaches. The first approach consists in implementing the single "care-research" form within the EHR and aligning the XML structures of the HL7 CDA document and the CDISC ODM message to export relevant data from the EHR to the CDMS. The second approach consists in displaying a single "care-research" XForms form within the EHR and generating both the HL7 CDA document and the CDISC ODM message to feed both the EHR and the CDMS. The solution based on XForms avoids overloading both the EHR and the CDMS with irrelevant information. Beyond syntactic interoperability, a perspective is to address the issue of semantic interoperability between both domains.
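The second approach, one form entry feeding two XML targets, can be sketched in a few lines. The element and attribute names below are simplified placeholders, not the real HL7 CDA or CDISC ODM schemas, and the form fields are invented.

```python
import xml.etree.ElementTree as ET

# One multipurpose form entry (invented fields).
form_data = {"patient_id": "P001", "finding": "normal coronary arteries"}

def to_cda_like(data):
    """Serialize the form entry as a simplified CDA-style document."""
    doc = ET.Element("ClinicalDocument")
    ET.SubElement(doc, "recordTarget").text = data["patient_id"]
    ET.SubElement(doc, "observation").text = data["finding"]
    return ET.tostring(doc, encoding="unicode")

def to_odm_like(data):
    """Serialize the same entry as a simplified ODM-style message."""
    doc = ET.Element("ODM")
    ET.SubElement(doc, "SubjectData", SubjectKey=data["patient_id"])
    ET.SubElement(doc, "ItemData", ItemOID="FINDING", Value=data["finding"])
    return ET.tostring(doc, encoding="unicode")
```

Generating both documents from the single `form_data` dictionary is what keeps the EHR and the CDMS each receiving only the fields relevant to it.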
Ayre, Lori Bowen
The approval by the National Information Standards Organization (NISO) of a new standard for RFID in libraries is a big step toward interoperability among libraries and vendors. By following this set of practices and procedures, libraries can ensure that an RFID tag in one library can be used seamlessly by another, assuming both comply, even if they have different suppliers for tags, hardware, and software. In this issue of Library Technology Reports, Lori Bowen Ayre, an experienced implementer of automated materials handling systems, provides background on the evolution of the standard
Bagnasco, S.; Barbera, R; Buncic, P.; Carminati, F.; Cerello, P.; Saiz, P.
AliEn (ALICE Environment) is a GRID-like system for large-scale job submission and distributed data management, developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. With the aim of exploiting upcoming Grid resources to run AliEn-managed jobs and store the produced data, the problem of AliEn-EDG interoperability was addressed and an interface was designed. One or more EDG (European Data Grid) User Interface machines run the AliEn software suite (Cluster Monitor, Stora...
Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.
A presentation is provided to demonstrate the interoperability between two space flight Mission Operation Centers (MOCs) and to emulate telemetry, actions, and alert flows between the two centers. One framework uses a COTS C3I system that uses CORBA to interface to the local OTF data network. The second framework relies on current Houston MCC frameworks and ad hoc clients. Messaging relies on SM&C MAL, Core, and Common Service formats, while the transport layer uses AMS. A centralized SM&C Registry uses HTTP/XML for transport/encoding. The project's status and progress are reviewed.
Full Text Available The Open Solutions Alliance is a consortium of leading commercial open source vendors, integrators, and end users dedicated to the growth of open source based solutions in the enterprise. We believe Linux and other infrastructure software, such as Apache, have become mainstream, and packaged solutions represent the next great growth opportunity. However, some unique challenges can temper that opportunity. These challenges include getting the word out about the maturity and enterprise-readiness of those solutions, ensuring interoperability both with each other and with other proprietary and legacy solutions, and ensuring healthy collaboration between vendors and their respective customer and developer communities.
Zimmermann, Antoine; Sahay, Ratnesh; Fox, Ronan; Polleres, Axel
The need for semantics-preserving integration of complex data has been widely recognized in the healthcare domain. While standards such as Health Level Seven (HL7) have been developed in this direction, they have mostly been applied in limited, controlled environments, and are still used incoherently across countries, organizations, or hospitals. In a more mobile and global society, data and knowledge are going to be commonly exchanged between various systems at Web scale. Specialists in this domain have increasingly argued in favor of using Semantic Web technologies for modeling healthcare data in a well formalized way. This paper provides a reality check on how far current Semantic Web standards can tackle interoperability issues arising in such systems, driven by the modeling of concrete use cases on exchanging clinical data and practices. Recognizing the insufficiency of standard OWL to model our scenario, we survey theoretical approaches to extend OWL by modularity and context towards handling heterogeneity in Semantic-Web-enabled health care and life sciences (HCLS) systems. We come to the conclusion that none of these approaches addresses all of our use case heterogeneity aspects in its entirety. We finally sketch paths on how better approaches could be devised by combining several existing techniques.
Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G
The increasingly large amount of data produced in healthcare (e.g. collected through health information systems such as electronic medical records (EMRs), or through novel data sources such as personal health records (PHRs), social media, and web resources) enables the creation of detailed records about people's health, sentiments, and activities (e.g. physical activity, diet, sleep quality) that can be used in the public health area, among others. However, despite the transformative potential of big data in public health surveillance, there are several challenges in integrating big data. In this paper, the interoperability challenge is tackled and a semantic Extract-Transform-Load (ETL) service is proposed that seeks to semantically annotate big data to turn it into valuable data for analysis. This service is considered part of a health analytics engine on the cloud that interacts with existing healthcare information exchange networks, like the Integrating the Healthcare Enterprise (IHE), PHRs, sensors, mobile applications, and other web resources to retrieve patient health, behavioral, and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data potentially relevant to human obesity enables the use of appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, drive health policy changes and promote healthy behaviors where residents live, work, learn, shop, and play.
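The extract-transform-load flow with semantic annotation can be sketched minimally as below. The keyword-to-concept map, the `obo:`-style term identifiers, and the record format are invented for illustration; a real service would use curated ontologies rather than substring matching.

```python
# Hypothetical mapping from activity keywords to ontology concept IDs.
TERM_MAP = {
    "ran": "obo:PhysicalActivity",
    "slept": "obo:Sleep",
    "ate": "obo:DietaryIntake",
}

def extract(raw_lines):
    """Pull non-empty records out of a raw feed, normalized to lowercase."""
    return [line.strip().lower() for line in raw_lines if line.strip()]

def transform(records):
    """Annotate each record with the ontology concepts it mentions."""
    annotated = []
    for rec in records:
        tags = sorted({term for kw, term in TERM_MAP.items() if kw in rec})
        annotated.append({"text": rec, "concepts": tags})
    return annotated

def load(annotated, store):
    """Append the annotated records to the analytic store."""
    store.extend(annotated)
    return store
```

Downstream analytics then query by concept (e.g. all `obo:PhysicalActivity` records) rather than by raw text.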
Solou, Dimitra; Dimopoulou, Efi
In recent years the development of technology and the lifting of several technical limitations have brought the third dimension to the fore. The complexity of urban environments and the strong need for land administration intensify the need for a three-dimensional cadastral system. Despite the progress in the field of geographic information systems and 3D modeling techniques, there is no fully digital 3D cadastre. The existing geographic information systems and the different methods of three-dimensional modeling allow for better management, visualization, and dissemination of information. Nevertheless, these opportunities cannot be fully exploited because of deficiencies in standardization and interoperability in these systems. Within this context, CityGML was developed as an international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. CityGML defines geometry and topology for city modeling, also focusing on semantic aspects of 3D city information. The scope of CityGML is to reach common terminology, also addressing the imperative need for interoperability and data integration, taking into account the number of available geographic information systems and modeling techniques. The aim of this paper is to develop an application for managing the semantic information of a model generated by procedural modeling. The model was initially implemented in ESRI's CityEngine software and then imported into the ArcGIS environment. The final goal was the original model's semantic enrichment and its conversion to CityGML format. Semantic information management and interoperability proved feasible using ESRI's 3DCities Project tools, since their database structure supports adding semantic information to the CityEngine model and automatically converting it to CityGML for advanced analysis and visualization in different application areas.
Kaplan, I L
Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
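Querying a semantic graph of subject-predicate-object triples can be illustrated with a few lines of pattern matching, where `?`-prefixed terms are variables. The graph contents and the `match` helper are invented for illustration and are not the query language described in this report.

```python
# A tiny semantic graph as subject-predicate-object triples (invented data).
GRAPH = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "lab"),
]

def match(pattern, graph=GRAPH):
    """Return one variable-binding dict per triple matching the pattern.

    Each pattern position is either a constant (must equal the triple's
    value) or a "?var" that binds to whatever value the triple holds.
    """
    results = []
    for triple in graph:
        binding = {}
        for p, v in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = v
            elif p != v:
                break
        else:
            results.append(binding)
    return results
```

A full query language adds joins across multiple patterns and consistency checks for repeated variables, but single-pattern matching is the core extraction primitive.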
Rook, M.; Biljecki, F.; Diakité, A. A.
The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost during data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region-growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for these regions, which represent either the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that serve as starting points in a region-growing algorithm to create regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85% and 99% in the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties that point to the next steps in the research.
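The region-growing step, clustering adjacent upward-facing triangles, can be sketched as a breadth-first flood fill over a triangle adjacency graph. The triangle IDs, normals, and threshold below are toy assumptions, not the paper's datasets or tuned parameters.

```python
from collections import deque

# Toy mesh: z-component of each triangle's normal, and triangle adjacency.
NORMAL_Z = {0: 0.99, 1: 0.97, 2: 0.10, 3: 0.95}
ADJACENT = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}

def grow_regions(threshold=0.9):
    """Cluster adjacent triangles whose normals point sufficiently upward."""
    seen, regions = set(), []
    for seed in NORMAL_Z:
        if seed in seen or NORMAL_Z[seed] < threshold:
            continue
        region, queue = [], deque([seed])
        seen.add(seed)
        while queue:
            t = queue.popleft()
            region.append(t)
            for n in ADJACENT[t]:
                if n not in seen and NORMAL_Z[n] >= threshold:
                    seen.add(n)
                    queue.append(n)
        regions.append(sorted(region))
    return regions
```

Each resulting region is a candidate ground or RoofSurface patch; in the workflow described above, heuristic scores over such regions then feed the SVM's decision space.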
Durbha, S. S.; King, R. L.; Shah, V. P.; Younan, N. H.
There is a growing demand for digital databases of topographic and thematic information for a multitude of applications in environmental management, and also in data integration and efficient updating of other spatially oriented data. These thematic data sets are highly heterogeneous in syntax, structure, and semantics, as they are produced and provided by a variety of agencies having different definitions, standards, and applications of the data. In this paper, we focus on the semantic heterogeneity in thematic information sources, as it has been widely recognized that semantic conflicts are responsible for the most serious data heterogeneity problems hindering efficient interoperability between heterogeneous information sources. In particular, we focus on the semantic heterogeneities present in the land cover classification schemes corresponding to the global land cover characterization data. We propose a framework, Semantics Enabled Thematic data Integration (SETI), that describes in depth the methodology involved in the reconciliation of such semantic conflicts by adopting the emerging semantic web technologies. Ontologies were developed for the classification schemes, and a shared-ontology approach for integrating the application-level ontologies is described. We employ description logics (DL)-based reasoning on the terminological knowledge base developed for the land cover characterization, which enables querying and retrieval that go beyond keyword-based searches.
Hanan M. Alghamdi
Full Text Available Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they need to involve deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve clustering with semantic similarities. This approach comprises integrating k-means document clustering with semantic feature extraction and document vectorization to group Arabic web pages according to semantic similarities and then show the semantic annotation. The document vectorization helps to transform text documents into a semantic class probability distribution or semantic class density. To reach semantic similarities, the approach extracts the semantic class features and integrates them into the similarity weighting schema. The quality of the clustering result has been evaluated using the purity and the mean intra-cluster distance (MICD) evaluation measures. We have evaluated the proposed approach on a set of common Arabic news web pages. We acquired favorable clustering results that are effective in minimizing the MICD, expanding the purity, and lowering the runtime.
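The pipeline of clustering semantic class probability vectors with k-means and scoring with purity can be sketched on toy data. The vectors, labels, and fixed initialization below are invented; a real run would derive the vectors from semantic feature extraction over documents.

```python
# Toy "semantic class probability" vectors and their true topic labels.
POINTS = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
LABELS = {(0.9, 0.1): "sports", (0.8, 0.2): "sports",
          (0.1, 0.9): "politics", (0.2, 0.8): "politics"}

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

def kmeans(points, k, iters=10):
    """Lloyd's algorithm with the first k points as initial centers."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

def purity(clusters, label_of):
    """Fraction of points whose cluster's majority label matches their own."""
    total = sum(len(c) for c in clusters)
    hits = sum(max(sum(1 for p in c if label_of[p] == lab)
                   for lab in set(label_of.values()))
               for c in clusters if c)
    return hits / total
```

On this separable toy data the clusters align perfectly with the topic labels, giving purity 1.0; MICD would be computed analogously as the mean within-cluster distance.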
A coherent and integrated account of the leading UML 2 semantics work and the practical applications of UML semantics development With contributions from leading experts in the field, the book begins with an introduction to UML and goes on to offer in-depth and up-to-date coverage of: The role of semantics Considerations and rationale for a UML system model Definition of the UML system model UML descriptive semantics Axiomatic semantics of UML class diagrams The object constraint language Axiomatic semantics of state machines A coalgebraic semantic framework for reasoning about interaction des
Voronov, A.; Englund, C.; Bengtsson, H.H.; Chen, L.; Ploeg, J.; Jongh, J.F.C.M. de; Sluis, H.J.D. van de
This paper presents the architecture of an Interactive Test Tool (ITT) for interoperability testing of Cooperative Intelligent Transport Systems (C-ITS). Cooperative systems are developed by different manufacturers at different locations, which makes interoperability testing a tedious task. Up until
WANG ZhiLiang; YIN Xia; JING ChuanMing
Interoperability testing is an important technique to ensure the quality of implementations of network communication protocols. In the next generation Internet protocol, real-time applications should be supported effectively. However, time constraints were not considered in the related studies of protocol interoperability testing, so existing interoperability testing methods are difficult to apply in real-time protocol interoperability testing. In this paper, a formal method for real-time protocol interoperability testing is proposed. Firstly, a formal model CMpTIOA (communicating multi-port timed input output automata) is defined to specify the system under test (SUT) in real-time protocol interoperability testing; based on this model, a timed interoperability relation is then defined. In order to check this relation, a test generation method is presented to generate a parameterized test behavior tree from the SUT model; a mechanism of executability pre-determination is also integrated in the test generation method to alleviate the state space explosion problem to some extent. The proposed theory and method are then applied in interoperability testing of the IPv6 neighbor discovery protocol to show the feasibility of the method.
Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.; de Laat, C.; Zimmermann, W.; Lee, Y.W.; Demchenko, Y.
This paper presents on-going research to develop the Inter-Cloud Architecture, which addresses the architectural problems in multi-provider multi-domain heterogeneous cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure services.
Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.J.; Laat, C. de
This paper presents on-going research to develop the Inter-Cloud Architecture that should address problems in multi-provider multi-domain heterogeneous Cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure services. Cloud tec
The Interoperability of Demand Response Resources Demonstration in NY (Interoperability Project) was awarded to Con Edison in 2009. The objective of the project was to develop and demonstrate methodologies to enhance the ability of customer sited Demand Response resources to integrate more effectively with electric delivery companies and regional transmission organizations.
This book constitutes the thoroughly refereed post conference proceedings of the first edition of the Semantic Web Evaluation Challenge, SemWebEval 2014, co-located with the 11th Extended Semantic Web conference, held in Anissaras, Crete, Greece, in May 2014. This book includes the descriptions of all methods and tools that competed at SemWebEval 2014, together with a detailed description of the tasks, evaluation procedures and datasets. The contributions are grouped in three areas: semantic publishing (sempub), concept-level sentiment analysis (ssa), and linked-data enabled recommender systems (recsys).
Gabbay, Dov M
This text offers an extension to the traditional Kripke semantics for non-classical logics by adding the notion of reactivity. Reactive Kripke models change their accessibility relation as we progress in the evaluation process of formulas in the model. This feature makes the reactive Kripke semantics strictly stronger and more applicable than the traditional one. Here we investigate the properties and axiomatisations of this new and most effective semantics, and we offer a wide landscape of applications of the idea of reactivity. Applied topics include reactive automata, reactive grammars, rea
Booth, N. L.; Brodaric, B.; Lucido, J. M.; Kuo, I.; Boisvert, E.; Cunningham, W. L.
using the OGC Sensor Observation Service (SOS) standard. Ground Water Markup Language (GWML) encodes well log, lithology and construction information and is exchanged using the OGC Web Feature Service (WFS) standard. Within the NGWMN Data Portal, data exchange between distributed data provider repositories is achieved through the use of these web services and a central mediation hub, which performs both format (syntactic) and nomenclature (semantic) mediation, conforming heterogeneous inputs into common standards-based outputs. Through these common standards, interoperability between the U.S. NGWMN and Canada's Groundwater Information Network (GIN) is achieved, advancing a ground water virtual observatory across North America.
Bhatt, Tejas; Zhang, Jianrong Janet
Despite the best efforts of food safety and food defense professionals, contaminated food continues to enter the food supply. It is imperative that contaminated food be removed from the supply chain as quickly as possible to protect public health and stabilize markets. To solve this problem, scores of technology companies purport to have the most effective, economical product tracing system. This study sought to compare and contrast the effectiveness of these systems at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. It also determined if these systems can work together to better secure the food supply (their interoperability). Institute of Food Technologists (IFT) hypothesized that when technology providers are given a full set of supply-chain data, even for a multi-ingredient product, their systems will generally be able to trace a contaminated product forward and backward through the supply chain. However, when provided with only a portion of supply-chain data, even for a product with a straightforward supply chain, it was expected that interoperability of the systems will be lacking and that there will be difficulty collaborating to identify sources and/or recipients of potentially contaminated product. IFT provided supply-chain data for one complex product to 9 product tracing technology providers, and then compared and contrasted their effectiveness at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. A vertically integrated foodservice restaurant agreed to work with IFT to secure data from its supply chain for both a multi-ingredient and a simpler product. Potential multi-ingredient products considered included canned tuna, supreme pizza, and beef tacos. IFT ensured that all supply-chain data collected did not include any proprietary information or information that would otherwise
Full Text Available Interoperability is not a new area of effort at NATO level. In fact, interoperability, and more specifically standardization, has been a key element of the Alliance's approach to fielding forces for decades. But as the security and operational environment has been in continuous change, the need to face new threats and the current involvement in challenging operations in Afghanistan and elsewhere, alongside the necessity to interoperate at lower and lower levels of command with an increasing number of nations, including non-NATO ISAF partners, NGOs, and other organizations, have made the task even more challenging. In this respect, Interoperability Integration within the NATO Defense Planning Process will facilitate the timely identification, development, and delivery of required forces and capabilities that are interoperable and adequately prepared, equipped, trained, and supported to undertake the Alliance's full spectrum of missions.
The relationship between diverse fuzzy semantics and the corresponding logic consequence operators has been analyzed systematically. It is proved, under certain conditions, that compactness and logical compactness of a fuzzy semantics are equivalent, respectively, to compactness and continuity of the logic consequence operator induced by that semantics. A general compactness theorem for fuzzy semantics has been established, which says that every fuzzy semantics defined on a free algebra with members corresponding to continuous functions is compact.
Full Text Available This paper proposes a vision-based Semantic Unscented FastSLAM (UFastSLAM) algorithm for mobile service robots, combining semantic relationships with Unscented FastSLAM. The landmark positions and the semantic relationships among landmarks are detected by binocular vision. The semantic observation model can then be created by transforming the semantic relationships into a semantic metric map. Semantic Unscented FastSLAM can be used to update the landmark locations and the robot pose even when the encoder accumulates large errors that may not be corrected by the loop closure detection of the vision system. Experiments have been carried out to demonstrate that the Semantic Unscented FastSLAM algorithm achieves much better performance in indoor autonomous surveillance than Unscented FastSLAM.
Semantics is the study of the meanings of words and sentences. The word is the most basic unit in every language, and understanding word meaning is the most important problem in translation. Therefore, the analysis of semantics provides a very direct approach to translation. In this paper, I focus on three kinds of word meaning in translation, the ambiguities caused by word meaning, and how to deal with such ambiguities.
Md. Rashedul Hasan
Full Text Available The invention of the Semantic Web and related technologies is fostering a computing paradigm that entails a shift from databases to Knowledge Bases (KBs), where the core is the ontology, which plays a main role in enabling the reasoning power that can make implicit facts explicit in order to produce better results for users. In addition, KB-based systems provide mechanisms to manage information and its semantics, which can make systems semantically interoperable so that they can exchange and share data between them. In order to overcome interoperability issues and to exploit the benefits offered by state-of-the-art technologies, we moved to a KB-based system. This paper presents the development of an earthquake engineering ontology with a focus on research project management and experiments. The developed ontology was validated by domain experts, published in RDF, and integrated into WordNet. Data originating from scientific experiments such as cyclic and pseudo-dynamic tests were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso, and VirtGraph tools, in order to publish, store, and manage RDF data, respectively. Finally, a system was developed with full integration of the ontology, experimental data, and tools to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes.
郑红; 陆汝钤; 金芝; 胡思康
When querying a large-scale knowledge base, a major technique for improving performance is to preload knowledge to minimize the number of round trips to the knowledge base. In this paper, an ontology-based semantic cache is proposed for an agent- and ontology-oriented knowledge base (AOKB). In AOKB, an ontology is the collection of relationships between a group of knowledge units (agents and/or other sub-ontologies). When loading some agent A, its relationships with other knowledge units are examined, and those that have a tight semantic tie with A are preloaded at the same time, including agents and sub-ontologies in the same ontology as A. The preloaded agents and ontologies are saved in a semantic cache located in memory. Test results show that up to a 50% reduction in running time is achieved.
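The preloading policy described above can be sketched as follows; the tie strengths, the 0.5 threshold, and the loader function are invented stand-ins for the AOKB internals, shown only to make the batching idea concrete.

```python
SEMANTIC_TIES = {  # (unit, unit) -> tie strength in [0, 1]; illustrative
    ("A", "B"): 0.9,
    ("A", "C"): 0.2,
    ("A", "ont1"): 0.7,
}

class SemanticCache:
    def __init__(self, kb_loader, threshold=0.5):
        self.kb_loader = kb_loader      # stands in for the expensive back-end fetch
        self.threshold = threshold
        self.cache = {}
        self.roundtrips = 0

    def _neighbours(self, unit):
        """Knowledge units with a tight semantic tie to `unit`."""
        for (a, b), w in SEMANTIC_TIES.items():
            if w >= self.threshold:
                if a == unit:
                    yield b
                elif b == unit:
                    yield a

    def load(self, unit):
        if unit in self.cache:
            return self.cache[unit]     # cache hit: no round trip
        batch = [unit] + list(self._neighbours(unit))
        self.roundtrips += 1            # one batched fetch covers the whole batch
        for u in batch:
            self.cache[u] = self.kb_loader(u)
        return self.cache[unit]

cache = SemanticCache(kb_loader=lambda u: f"<contents of {u}>")
cache.load("A")          # fetches A, B and ont1 together
cache.load("B")          # already preloaded: served from memory
print(cache.roundtrips)  # 1
```

Loosely tied units (C above) are deliberately left out of the batch, which is what keeps the preload cheap.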
Magee, Thoman [Consolidated Edison Company Of New York, Inc., NY (United States)
The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG
Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V
Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data- and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.
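The core of this kind of consistency check, finding a class whose superclasses are declared disjoint, can be sketched without a full description-logic reasoner. The class names below are invented examples, not drawn from any real biomedical ontology.

```python
from itertools import combinations

SUBCLASS = {  # child -> asserted parents (illustrative toy ontology)
    "Apoptosis": ["Process"],
    "Mitochondrion": ["MaterialObject"],
    "BrokenTerm": ["Process", "MaterialObject"],  # suspicious definition
}
DISJOINT = {frozenset(["Process", "MaterialObject"])}

def superclasses(cls):
    """Transitive closure over the subclass hierarchy."""
    seen, stack = set(), [cls]
    while stack:
        for parent in SUBCLASS.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def unsatisfiable(cls):
    """A class is contradictory if two of its superclasses are disjoint."""
    supers = superclasses(cls) | {cls}
    return any(frozenset(pair) in DISJOINT
               for pair in combinations(supers, 2))

print([c for c in SUBCLASS if unsatisfiable(c)])  # ['BrokenTerm']
```

Aligning terms with a top-level ontology, as the paper does, is what supplies the disjointness axioms that make such contradictions detectable at scale.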
GU Ning; XU Xuebiao; SHI Baile
In this paper, the authors present the design and implementation of an Interoperable Object Platform for Multi-Databases (IOPMD). The aim of the system is to provide a uniform object view and a set of tools for object manipulation and query over heterogeneous multiple data sources in a client/server environment. The common object model is compatible with ODMG 2.0 and OMG's CORBA, providing the main OO features such as OID, attribute, method, inheritance and reference. Three types of interfaces, namely Vface, IOQL and the C++ API, give the database programmer tools and functionality for application development. Nested transactions and compensation technology are adopted in the transaction manager. In discussing some key implementation techniques, translation and mapping approaches from various schemata to a common object schema are proposed. Buffer management provides the data caching policy and consistency maintenance of cached data. Version management presents operations based on the definitions in the semantic version model, and introduces the implementation of the semantic version graph.
We present AceWiki, a prototype of a new kind of semantic wiki using the controlled natural language Attempto Controlled English (ACE) for representing its content. ACE is a subset of English with a restricted grammar and a formal semantics. The use of ACE has two important advantages over existing semantic wikis. First, we can improve the usability and achieve a shallow learning curve. Second, ACE is more expressive than the formal languages of existing semantic wikis. Our evaluation shows that people who are not familiar with the formal foundations of the Semantic Web are able to deal with AceWiki after a very short learning phase and without the help of an expert.
Wandji Tchami, Ornella; L'Homme, Marie-Claude; Grabar, Natalia
The field of medicine gathers actors with different levels of expertise. These actors must interact, although their mutual understanding is not always completely successful. We propose to study corpora (with high and low levels of expertise) in order to observe their specificities. More specifically, we perform a contrastive analysis of verbs, and of the syntactic and semantic features of their participants, based on the Frame Semantics framework and the methodology implemented in FrameNet. To achieve this, we use an existing medical terminology to automatically annotate the semantic classes of the participants of verbs, which we assume are indicative of semantic roles. Our results indicate that verbs show similar or very close semantics in some contexts, while in other contexts they behave differently. These results are important for studying the understanding of medical information by patients and for improving communication between patients and medical doctors.
Ranking and optimization of web service compositions are some of the most interesting challenges at present. Since web services can be enhanced with formal semantic descriptions, forming "semantic web services", it becomes conceivable to exploit the quality of the semantic links between services (of any composition) as one of the optimization criteria. For this we propose to use the semantic similarities between output and input parameters of web services. Coupling this with other criteria, such as quality of service (QoS), allows us to rank and optimize compositions achieving the same goal. Here we suggest an innovative and extensible optimization model designed to balance semantic fit (or functional quality) with non-functional QoS metrics. To allow the use of this model in the context of a large number of services, as foreseen by the strategic EC-funded project SOA4All, we propose and test the use of genetic algorithms.
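The ranking idea can be sketched as a weighted fitness over per-link semantic similarity and QoS, driving a tiny genetic loop. The weights, the scoring and the search below are assumptions for illustration, not the SOA4All implementation.

```python
import random

random.seed(7)

def fitness(composition, w_sem=0.6, w_qos=0.4):
    """Each link carries (semantic_similarity, qos) scores in [0, 1]."""
    sem = sum(s for s, _ in composition) / len(composition)
    qos = sum(q for _, q in composition) / len(composition)
    return w_sem * sem + w_qos * qos

def evolve(population, generations=20, mutation=0.1):
    """Tiny genetic loop: keep the fitter half, mutate copies back in."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [
            [(min(1, max(0, s + random.uniform(-mutation, mutation))),
              min(1, max(0, q + random.uniform(-mutation, mutation))))
             for s, q in random.choice(survivors)]
            for _ in survivors
        ]
        population = survivors + children
    return max(population, key=fitness)

# Four candidate compositions of three service links each:
pop = [[(random.random(), random.random()) for _ in range(3)] for _ in range(4)]
best = evolve(pop)
print(round(fitness(best), 2))
```

A real deployment would replace the random scores with measured output-to-input semantic similarities and monitored QoS values.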
Leadbetter, Adam; Lowry, Roy; Clements, Oliver
Over recent years, there has been a proliferation of environmental data portals utilising a wide range of systems and services, many of which cannot interoperate. The European Union Framework 7 project NETMAR (which commenced in February 2010) aims to provide a toolkit for building such portals in a coherent manner through the use of chained Open Geospatial Consortium Web Services (WxS), OPeNDAP file access and W3C standards controlled by a Business Process Execution Language workflow. As such, the end product will be configurable by user communities interested in developing a portal for marine environmental data, and will offer search, download and integration tools for a range of satellite, model and observed data from open ocean and coastal areas. Further processing of these data will also be available in order to provide statistics and derived products suitable for decision making in the chosen environmental domain. In order to make the resulting portals truly interoperable, the NETMAR programme requires a detailed definition of the semantics of the services being called and the data being requested. A key goal of the NETMAR programme is, therefore, to develop a multi-domain and multilingual ontology of marine data and services. This will allow searches across both human languages and scientific domains. The approach taken will be to analyse existing semantic resources and provide mappings between them, gluing together the definitions, semantics and workflows of the WxS services. The mappings between terms aim to be more general than the standard "narrower than"/"broader than" types seen in the thesauri or simple ontologies implemented by previous programmes. Tools for the development and population of ontologies will also be provided by NETMAR, as there will be instances in which existing resources cannot sufficiently describe newly encountered data or services.
Building information modelling (BIM) is defined as a process involving the generation and management of digital representations of the physical and functional characteristics of a facility. The purpose of interoperability in integrated or "open" BIM is to facilitate the exchange of information between different digital systems, models and tools. There has been effort towards data interoperability with the development of open standards and object-oriented models, such as the industry foundation classes (IFC) for vertical infrastructure. However, the lack of open data standards for the exchange of information about horizontal infrastructure limits the adoption and effectiveness of integrated BIM. The paper outlines two interoperability issues for the construction of rail infrastructure. The issues are presented in two case study reports, one from Australia and one from Malaysia. Each case study includes: a description of the project, the application of BIM in the project, a discussion of the promised BIM interoperability solution, and the identification of the unresolved lack of interoperability for horizontal infrastructure project management. The Moreton Bay Rail project in Australia introduces general software interoperability issues. The Light Rail Extension project in Kuala Lumpur outlines an example of integration problems related to two different location data structures. The paper highlights how the continuing lack of data interoperability limits the utilisation of integrated BIM for horizontal infrastructure rail projects.
Kroszynski, Uri; Sørensen, Torben; Ludwig, Arnold
Esprit Project 6457, "Interoperability of Standards for Robotics in CIME (InterRob)", belongs to the Subprogramme "Integration in Manufacturing" of Esprit, the European Specific Programme for Research and Development in Information Technology supported by the European Commission. The first main goal of InterRob was to close the information chain between product design, simulation, programming, and robot control by developing standardized interfaces and their software implementation for the standards STEP (International Standard for the Exchange of Product model data, ISO 10303) and IRL (Industrial Robot Language, DIN 66312). This is a continuation of the previous Esprit projects CAD*I and NIRO, which developed substantial basics of STEP. The InterRob approach is based on standardized models for product geometry, kinematics, robotics, dynamics and control, hence on a coherent neutral information model...
Stevens, R; Miller, C
Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.
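The idea of object orientation as an intermediate form can be illustrated by wrapping two differently stored resources behind one common interface, so clients never see the storage paradigm. The class and method names below are invented for illustration, not actual CORBA IDL.

```python
from abc import ABC, abstractmethod

class SequenceResource(ABC):
    """Common interface every wrapped bioinformatics resource implements."""

    @abstractmethod
    def get_sequence(self, accession: str) -> str: ...

class FlatFileBank(SequenceResource):
    def __init__(self, records: dict):
        self._records = records         # stands in for a flat-file archive

    def get_sequence(self, accession):
        return self._records[accession]

class RelationalBank(SequenceResource):
    def __init__(self, rows: list):
        self._rows = rows               # stands in for SQL query results

    def get_sequence(self, accession):
        return next(seq for acc, seq in self._rows if acc == accession)

# A client sees only the interface, never the storage form:
resources = [
    FlatFileBank({"P12345": "MKTAYIAK"}),
    RelationalBank([("Q67890", "GATTACA")]),
]

def lookup(accession):
    for r in resources:
        try:
            return r.get_sequence(accession)
        except (KeyError, StopIteration):
            continue
    return None

print(lookup("Q67890"))  # GATTACA
```

In CORBA the interface would be declared once in IDL and each resource would implement it behind an object request broker, possibly in different languages on different hosts.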
Avery Bingham; Javier Ortensi
Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.
Nowadays, business interoperability is one of the key factors for assuring competitive advantage for the participating business partners. In order to implement business cooperation, scalable, distributed and portable collaborative systems have to be implemented. This article presents some of the most widely used technologies in this field. Furthermore, it presents a software application architecture based on the Business Process Modeling Notation standard and automated semantic web service coupling for modeling business flows in a collaborative manner. The main business processes will be represented in a single, hierarchic flow diagram. Each element of the diagram will represent calls to semantic web services. The business logic (the business rules and constraints) will be structured with the help of OWL (the Web Ontology Language). Moreover, OWL will also be used to create the semantic web service specifications.
Kyriazos, George K; Gerostathopoulos, Ilias Th; Kolias, Vassileios D; Stoitsis, John S; Nikita, Konstantina S
The need to annotate the continuously increasing volume of medical image data is recognized by medical experts for a variety of purposes, whether medical practice, research or education. The rich information content latent in medical images can be made explicit and formal with the use of well-defined ontologies. The evolution of the Semantic Web now offers a unique opportunity for a web-based, service-oriented approach. Remote access to the FMA and ICD-10 reference ontologies provides the ontological annotation framework. The proposed system utilizes this infrastructure to provide a customizable and robust annotation procedure. It also provides an intelligent search mechanism indicating the advantages of semantic over keyword search. The common representation layer discussed facilitates interoperability between institutions and systems, while the semantic content enables inference and knowledge integration.
Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich
Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration, with emphasis on a semantics-driven approach that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates the feasibility of using already available open semantic web standards and technologies to integrate public and proprietary data resources spanning structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost-effective approach for numerous stakeholders to work together to solve common problems.
One of the most serious bottlenecks in the scientific workflows of the biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century the TDWG Biodiversity Information Standards organisation has had a central role in defining and promoting data standards and protocols supporting interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular biodiversity informatics applications and infrastructures, ranging from small desktop software solutions to large-scale international data networks. However, individual scientists and groups of collaborating scientists have difficulties in fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for the biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical work-horses for numerous working processes related to the biodiversity sciences, they can be seen as information brokers mediating information between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating the essential information flows needed for comprehensive data exchange, data indexing, web publication, and versioning. This work will make an important contribution to the shaping of an international, interoperable, and user-oriented biodiversity information infrastructure.
Hitzler, Pascal; Rudolph, Sebastian
The Quest for Semantics: Building Models; Calculating with Knowledge; Exchanging Information; Semantic Web Technologies. Resource Description Framework (RDF): Simple Ontologies in RDF and RDF Schema; Introduction to RDF; Syntax for RDF; Advanced Features; Simple Ontologies in RDF Schema; Encoding of Special Data Structures; An Example. RDF Formal Semantics: Why Semantics?; Model-Theoretic Semantics for RDF(S); Syntactic Reasoning with Deduction Rules; The Semantic Limits of RDF(S). Web Ontology Language (OWL): Ontologies in OWL; OWL Syntax and Intuitive Semantics; OWL Species; The Forthcoming OWL 2 Standard; OWL Formal Semantics...
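The deduction-rule style of RDFS reasoning covered in the book can be sketched by applying the standard rdfs9 entailment rule (x type C and C subClassOf D entail x type D) to closure over a toy vocabulary; the `ex:` terms below are invented.

```python
SUBCLASS_OF = [
    ("ex:Cat", "ex:Mammal"),
    ("ex:Mammal", "ex:Animal"),
]
TYPE_OF = [("ex:tom", "ex:Cat")]

def infer_types(type_of, subclass_of):
    """Apply rule rdfs9 repeatedly until no new rdf:type triples appear."""
    inferred = set(type_of)
    changed = True
    while changed:
        changed = False
        for x, c in list(inferred):
            for sub, sup in subclass_of:
                if c == sub and (x, sup) not in inferred:
                    inferred.add((x, sup))
                    changed = True
    return inferred

print(sorted(infer_types(TYPE_OF, SUBCLASS_OF)))
# [('ex:tom', 'ex:Animal'), ('ex:tom', 'ex:Cat'), ('ex:tom', 'ex:Mammal')]
```

A full RDFS reasoner runs a dozen such rules (rdfs2, rdfs7, rdfs11, ...) to the same fixed point.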
ZHANG Tong-zhen; SHEN Rui-min
A large semantic gap exists between content-based image retrieval (CBIR) and high-level semantics, so additional semantic information should be attached to images. This involves three aspects: the semantic representation model, semantic information building, and semantic retrieval techniques. In this paper, we introduce an associated semantic network and an automatic semantic annotation system. In the system, a semantic network model is employed as the semantic representation model; it uses semantic keywords, a linguistic ontology and low-level features in semantic similarity calculation. Through several rounds of users' relevance feedback, the semantic network is enriched automatically. To speed up the growth of the semantic network and obtain balanced annotation, semantic seeds and semantic loners are employed.
Di Martino, Beniamino; Esposito, Antonio
This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks
Blackstock, Michael; Lea, Rodger
Interoperability in the Internet of Things is critical for emerging services and applications. In this paper we advocate the use of IoT 'hubs' to aggregate things using web protocols, and suggest a staged approach to interoperability. In the context of a UK government-funded project involving 8 IoT projects addressing cross-domain IoT interoperability, we introduce the HyperCat IoT catalogue specification. We then describe the tools and techniques we developed to adapt an existing data portal...
Carneiro, Gustavo; Chan, Antoni B; Moreno, Pedro J; Vasconcelos, Nuno
A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning.
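The pooling step can be sketched in a much-simplified form: features from every image carrying a label are pooled into one density per semantic class (a single 1-D Gaussian here, instead of the paper's hierarchical EM over mixtures), and annotation picks the class maximizing the bag's likelihood. The labels and feature values are invented toy data.

```python
import math

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fit_class_density(bags):
    """Pool the feature bags of all images sharing a label into one density."""
    pooled = [x for bag in bags for x in bag]
    mean = sum(pooled) / len(pooled)
    var = sum((x - mean) ** 2 for x in pooled) / len(pooled) or 1e-6
    return mean, var

# Toy training set: two labels, each with two "images" of 1-D features.
training = {
    "sky":   [[0.1, 0.2, 0.15], [0.05, 0.25]],
    "grass": [[0.8, 0.9], [0.85, 0.7, 0.95]],
}
densities = {label: fit_class_density(bags) for label, bags in training.items()}

def annotate(bag):
    """Label a new image by the class maximizing the bag's log-likelihood."""
    return max(densities, key=lambda lbl: sum(
        gaussian_logpdf(x, *densities[lbl]) for x in bag))

print(annotate([0.12, 0.18]))  # sky
```

The supervised structure is the point: no per-image segmentation is needed, only image-level labels, exactly as in the paper's formulation.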
In this article, semantic models of the gerund in the Lithuanian language are investigated. Their productivity and the reasons for their change in the Lithuanian language are identified. The tendency to use the gerund's semantic structure in noun constructions is typical not only of Greek or Latin but also of English, Russian, etc. Regular polysemy is regarded as semantic derivation, i.e. shifting from main meanings to derivative ones. The object of this investigation is the usage patterns of gerunds, which bear both the meaning of a verb and of a noun. The examples for the present study have been gathered from the language of different Lithuanian dialects as well as from the Dictionary of the Lithuanian Language (different volumes), etc. The research results reveal that semantic changes of object and result are the most productive, whereas the mood or time semantic model proved not so productive. The productivity of regular models depends on the fact that there are suffix derivatives which have the meaning of a result. The research shows that scientific style and the language of different dialects are rich in the use of the gerund.
Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man
Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To create an efficient system for the management of home video content, it is required to categorize home video content in a semantic way. So far, a significant amount of research has already been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited to the context of home video categorization. To overcome this problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed method, experiments were performed with 70 home video sequences, labeled with concepts from the MPEG-7 VCE2 dataset. For the home video sequences used, the proposed system produced a recall of 77% and an accuracy of 78%.
Background: As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results: COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and Linked Data publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions: The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
• Business Process Modeling Notation (BPMN) • Business Process Definition Metamodel (BPDM). A Business Process (BP) is a defined sequence of steps to be executed in enterprise applications, to evaluate the capabilities of suppliers, and to compare against the competition. BPMN standardizes flowchart diagrams that...
Pedersen, Rune; Wynn, Rolf; Ellingsen, Gunnar
This paper is a status report from a large-scale openEHR-based EPR project of the North Norway Regional Health Authority, encouraged by the unfolding of a national repository for openEHR archetypes. Clinicians need to engage in, and be responsible for, the production of archetypes. The consensus processes have so far been challenged by a low number of active clinicians, a lack of the critical specialties needed to reach consensus, and a cumbersome review process (3 or 4 review rounds) for each archetype. The goal is to have several clinicians from each specialty as a backup if one is prevented from participating. Archetypes and their importance for structured data and the sharing of information have to become more visible to clinicians through a sharper information practice.
Daniele, L.M.; Hartog, F.T.H. den; Roes, J.B.M.
About two thirds of the energy consumed in buildings originates from household appliances. Nowadays, appliances are often intelligent and networked devices that form complete energy consuming, producing, and managing systems. Reducing energy consumption is therefore a matter of managing and optimizing…
Hartog, F.T.H. den; Daniele, L.M.; Roes, J.B.M.
About two thirds of the energy consumed in buildings originates from household appliances. Nowadays, appliances are often intelligent and networked devices that form complete energy consuming, producing, and managing systems. Reducing energy consumption is therefore a matter of managing and optimizing the energy utilization…
Full Text Available This paper describes the results of a collaborative effort that has reconciled the Open Annotation Collaboration (OAC) ontology and the Annotation Ontology (AO) to produce a merged data model [the Open Annotation (OA) data model] to describe Web-based annotations, and hence facilitate the discovery, sharing and re-use of such annotations. Using a number of case studies that include digital scholarly editing, 3D museum artifacts and sensor data streams, we evaluate the OA model's capabilities. We also describe our implementation of an online annotation server that supports the storage, search and retrieval of OA-compliant annotations across multiple applications and disciplines. Finally, we discuss outstanding issues associated with the OA ontology, and the impact that certain design decisions have had on the efficient storage, indexing, search and retrieval of complex structured annotations.
Marchetti, Andrea; Ronzano, Francesco; Tesconi, Maurizio; Vossen, Piek; Agirre, Eneko; Bond, Francis; Bosma, Wauter; Herold, Axel; Hicks, Amanda; Hsieh, Shu-Kai; Isahara, Hitoshi; Huang, Chu-Ren; Kanzaki, Kyoko; Rigau, German; Segers, Roxane
KYOTO is an Asian-European project developing a community platform for modeling knowledge and finding facts across languages and cultures. The platform operates as a Wiki system that multilingual and multi-cultural communities can use to agree on the meaning of terms in specific domains. The Wiki is fed with terms that are automatically extracted from documents in different languages. The users can modify these terms and relate them across languages. The system generates complex, language-neu...
Daniele, L.M.; Hartog, F.T.H. den; Roes, J.B.M.
About two thirds of the energy consumed by buildings originates from the residential sectors and thus household appliances. Household appliances or home appliances are electrical/mechanical machines which accomplish some household functions. Nowadays, appliances are not stand-alone systems anymore.
distribution. Object Model Template (OMT) Specification – The OMT could be seen as a template for documenting information in HLA federations, i.e. it...grounding to the HLA OMT which enables mapping from conceptual model to component interface specification, where the latter may be expressed in terms of...the HLA OMT. The current version of BOM was standardized within SISO four years ago. Recently, a PDG (Product Development Group) of SISO released a
DiLauro, T.; Duerr, R.; Thessen, A. E.; Rippin, M.; Pralle, B.; Choudhury, G. S.
description, the DCS instance will be able to provide default mappings for the directories and files within the package payload and enable support for deposited content at a lower level of service. Internally, the DCS will map these hybrid package serializations to its own internal business objects and their properties. Thus, this approach is highly extensible, as other packaging formats could be mapped in a similar manner. In addition, this scheme supports establishing the fixity of the payload while still supporting update of the semantic overlay data. This allows a data producer with scarce resources or an archivist who acquires a researcher's data to package the data for deposit with the intention of augmenting the resource description in the future. The Data Conservancy is partnering with the Sustainable Environment Actionable Data project to test the interoperability of this new packaging mechanism.  Data Conservancy: http://dataconservancy.org/  BagIt: https://datatracker.ietf.org/doc/draft-kunze-bagit/  OAI-ORE: http://www.openarchives.org/ore/1.0/  SEAD: http://sead-data.net/
Ontology mappings are of critical importance for the Linked Data and the Semantic Web communities as they can help to mitigate the effects of heterogeneities, which are a major obstacle to the promise of interoperability of knowledge. To reduce creation costs and enable automated runtime integration, the description, discovery and most of all re-use of existing ontology mappings are needed. Meta-data can help to retrieve ontology mappings, to apply them and to m...
Torres, Diego; Skaf-Molli, Hala; Díaz, Alicia; Molli, Pascal
In this paper, we propose to extend Peer-to-Peer Semantic Wikis with personal semantic annotations. Semantic wikis are one of the most successful Semantic Web applications. In semantic wikis, wiki pages are annotated with semantic data to facilitate navigation, information retrieval and ontology emergence. The semantic data represents the shared knowledge base which describes the common understanding of the community. However, in a collaborative knowledge building process the knowledge is basically created by individuals who are involved in a social process. Therefore, it is fundamental to support personal knowledge building in a differentiated way. Currently no available semantic wikis support both personal and shared understandings. To overcome this problem, we propose a P2P collaborative knowledge building process and extend semantic wikis with personal annotation facilities to express personal understanding. In this paper, we detail the personal semantic annotation model and show its implementation in P2P semantic wikis. We also detail an evaluation study which shows that personal annotations demand less cognitive effort than semantic data and are very useful for enriching the shared knowledge base.
Van Valin, Jr., Robert D.
This paper argues that split-intransitive phenomena are better explained in semantic terms. A semantic analysis is carried out in Role and Reference Grammar, which assumes the theory of verb classification proposed in Dowty 1979. (49 references) (JL)
Hutchison, Keith A; Balota, David A; Neely, James H; Cortese, Michael J; Cohen-Shikora, Emily R; Tse, Chi-Shing; Yap, Melvin J; Bengson, Jesse J; Niemeyer, Dale; Buchanan, Erin
Speeded naming and lexical decision data for 1,661 target words following related and unrelated primes were collected from 768 subjects across four different universities. These behavioral measures have been integrated with demographic information for each subject and descriptive characteristics for every item. Subjects also completed portions of the Woodcock-Johnson reading battery, three attentional control tasks, and a circadian rhythm measure. These data are available at a user-friendly Internet-based repository ( http://spp.montana.edu ). This Web site includes a search engine designed to generate lists of prime-target pairs with specific characteristics (e.g., length, frequency, associative strength, latent semantic similarity, priming effect in standardized and raw reaction times). We illustrate the types of questions that can be addressed via the Semantic Priming Project. These data represent the largest behavioral database on semantic priming and are available to researchers to aid in selecting stimuli, testing theories, and reducing potential confounds in their studies.
Full Text Available This paper presents results from a review of the current standards used for collaboration between economic information systems, including web services and service oriented architecture, EDI, the ebXML framework, the RosettaNet framework, cXML, xCBL, UBL, BPMN, BPEL, WS-CDL, ASN.1, and others. Standards have a key role in promoting economic information system interoperability, and thus enable collaboration. Analyzing the current standards, technologies and applications used for economic information system interoperability has revealed a common pattern that runs through all of them. From this pattern we construct a basic model of interoperability around which we relate and judge all standards, technologies and applications for economic information system interoperability.
ZHONG Ning; KUANG Jing-ming; HE Zun-wen
A novel interoperability test sequence optimization scheme is proposed in which the genetic algorithm (GA) is used to obtain minimal-length interoperability test sequences. First, the basic interoperability test sequences are generated based on the minimal-complete-coverage criterion, which removes redundancy from conformance test sequences. The interoperability sequence minimization problem can then be treated as an instance of the set covering problem, and the GA is applied to remove redundancy in interoperability transitions. The results show that, compared to the conventional algorithm, the proposed algorithm is more practical for avoiding the state space explosion problem, since it reduces the length of the test sequences while maintaining the same transition coverage.
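The set-covering formulation can be sketched with a small genetic algorithm in Python; the bitmask encoding (one bit per candidate sequence) and the penalty weight are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def ga_min_cover(sequences, universe, pop=30, gens=60, seed=0):
    """Select a subset of test sequences covering all transitions while
    minimizing total sequence length (set covering via a simple GA).
    `sequences` is a list of (length, covered_transitions) pairs."""
    rng = random.Random(seed)
    n = len(sequences)

    def fitness(mask):
        covered, length = set(), 0
        for i, bit in enumerate(mask):
            if bit:
                covered |= sequences[i][1]
                length += sequences[i][0]
        # heavy penalty per uncovered transition keeps solutions feasible
        return length + 1000 * len(universe - covered)

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        survivors = popn[: pop // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1      # point mutation
            children.append(child)
        popn = survivors + children
    best = min(popn, key=fitness)
    return [i for i, bit in enumerate(best) if bit], fitness(best)
```

For a handful of candidate sequences the GA quickly converges to a short covering subset; for realistic protocol state machines the penalty weight and operators would need tuning.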
Duque, Arantxa; Campos, Cristina; Jiménez-Ruiz, Ernesto; Chalmeta, Ricardo
Significant developments in information and communication technologies and challenging market conditions have forced enterprises to adapt their way of doing business. In this context, providing mechanisms to guarantee interoperability among heterogeneous organisations has become a critical issue. Even though prolific research has already been conducted in the area of enterprise interoperability, we have found that enterprises still struggle to introduce fully interoperable solutions, especially, in terms of the development and application of ontologies. Thus, the aim of this paper is to introduce basic ontology concepts in a simple manner and to explain the advantages of the use of ontologies to improve interoperability. We will also present a case study showing the implementation of an application ontology for an enterprise in the textile/clothing sector.
Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris
Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.
LI Dun; MA Yong-tao; GUO Jian-li
Based on the text orientation classification, a new measurement approach to semantic orientation of words was proposed. According to the integrated and detailed definition of words in HowNet, seed sets including the words with intense orientations were built up. The orientation similarity between the seed words and the given word was then calculated using the sentiment weight priority to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the measurement approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.
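The seed-set approach can be illustrated with a toy sketch; here a simple Jaccard similarity over context-word sets stands in for the HowNet-based similarity described above, so all names and data are hypothetical:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of context words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def orientation(word_ctx, pos_seeds, neg_seeds, contexts):
    """Score a word's semantic orientation as its mean similarity to the
    positive seed set minus its mean similarity to the negative seed set.
    A positive score suggests positive orientation, negative the reverse."""
    pos = sum(jaccard(word_ctx, contexts[s]) for s in pos_seeds) / len(pos_seeds)
    neg = sum(jaccard(word_ctx, contexts[s]) for s in neg_seeds) / len(neg_seeds)
    return pos - neg
```

In the paper's setting the seed sets would contain words with intense orientations and the similarity would come from HowNet definitions rather than raw context overlap.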
Nielson, Flemming; Nielson, Hanne Riis
Flow logic is a “fast prototyping” approach to program analysis that shows great promise of being able to deal with a wide variety of languages and calculi for computation. However, seemingly innocent choices in the flow logic as well as in the operational semantics may inhibit proving the analysis correct. Our main conclusion is that environment based semantics is more flexible than either substitution based semantics or semantics making use of structural congruences (like alpha-renaming).
Levandoski, J J; Abdulla, G M
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
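One common representation in such work attaches validity intervals to graph edges, so that a static snapshot can be recovered for any point in time; a minimal sketch (field names assumed, not taken from the survey):

```python
def edges_at(temporal_edges, t):
    """Snapshot of a temporal semantic graph at time t: keep the
    (subject, relation, object) part of edges whose half-open validity
    interval [start, end) contains t."""
    return [(s, r, o)
            for (s, r, o, start, end) in temporal_edges
            if start <= t < end]
```

Queries over concept evolution then reduce to comparing snapshots at different times.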
Warren, S.; Craft, R.L.; Parks, R.C.; Gallagher, L.K.; Garcia, R.J.; Funkhouser, D.R.
Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.
Full Text Available To ensure secure content delivery, the Motion Picture Experts Group (MPEG) has dedicated significant effort to digital rights management (DRM) issues. MPEG is now moving from defining only hooks to proprietary systems (e.g., in MPEG-2, MPEG-4 Version 1) to specifying a more encompassing standard in intellectual property management and protection (IPMP). MPEG feels that this is necessary in order to achieve MPEG's most important goal: interoperability. The design of the IPMP Extension framework also considers the complexity of the MPEG-4 standard and the diversity of its applications. This architecture leaves the details of the design of IPMP tools in the hands of applications developers, while ensuring the maximum flexibility and security. This paper first briefly describes the background of the development of the MPEG-4 IPMP Extension. It then presents an overview of the MPEG-4 IPMP Extension, including its architecture, the flexible protection signaling, and the secure messaging framework for the communication between the terminal and the tools. Two sample usage scenarios are also provided to illustrate how an MPEG-4 IPMP Extension compliant system works.
Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.; Garica, R.J.; Parks, R.C.; Warren, S.
We propose an object-oriented information architecture for telemedicine systems that promotes secure `plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a ''lego-like'' fashion to achieve the desired device or system functionality. Introduction Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver en- tire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.
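The black-box, lego-like composition described above can be sketched as components behind one standardized message interface; the component names and message envelope below are hypothetical illustrations, not the proposed reference architecture:

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Black-box telemedicine component: the only contract is a
    standardized message-in, message-out interface."""
    @abstractmethod
    def handle(self, message: dict) -> dict: ...

class VitalsMonitor(Component):
    def handle(self, message):
        # wrap a raw device reading in a standardized envelope
        return {"type": "vitals", "payload": message}

class RecordStore(Component):
    def __init__(self):
        self.records = []
    def handle(self, message):
        self.records.append(message)
        return {"type": "ack", "stored": len(self.records)}

def pipeline(components, message):
    """Plug components together lego-like: each consumes its
    predecessor's output, regardless of vendor or implementation."""
    for component in components:
        message = component.handle(message)
    return message
```

Because every component honors the same interface, a best-of-breed replacement for any stage plugs in without changing the rest of the system, which is the interoperability argument the paper makes.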
Mendelson, David S; Erickson, Bradley J; Choy, Garry
Interoperability is a major focus of the quickly evolving world of Health IT. Easy, yet secure and confidential exchange of imaging exams and the associated reports must be a part of the solutions that are implemented. The availability of historical exams is essential in providing a quality interpretation and reducing inappropriate utilization of imaging services. Today, the exchange of imaging exams is most often achieved via a compact disc. We describe the virtues of this solution as well as challenges that have surfaced. Internet- and cloud-based technologies employed for many consumer services can provide a better solution. Vendors are making these solutions available. Standards for Internet-based exchange are emerging. Just as radiology converged on DICOM as a standard to store and view images, we need a common exchange standard. We will review the existing standards and how they are organized into useful workflows through Integrating the Healthcare Enterprise profiles. Integrating the Healthcare Enterprise and standards development processes are discussed. Health care and the domain of radiology must stay current with quickly evolving Internet standards. The successful use of the "cloud" will depend on both the technologies and the policies put into place around them, both of which we discuss. The radiology community must lead the way and provide a solution that works for radiologists and clinicians with use of the electronic medical record. We describe features we believe radiologists should consider when adding Internet-based exchange solutions to their practice.
DoD SBIR projects to develop a first-responder ICE Supervisor. TATRC BAA support has been instrumental in providing “program glue” to effectively...FDA has specifically asked AAMI (Association for the Advancement of Medical Instrumentation) to pursue the development of interoperability...advances in mind. We also recognize that, as in all technological advances, interoperability poses safety and medico-legal challenges as well. The
Downs, R. R.; Chen, R. S.
Both the natural and social science data communities are attempting to address the long-term sustainability of their data infrastructures in rapidly changing research, technological, and policy environments. Many parts of these communities are also considering how to improve the interoperability and integration of their data and systems across natural, social, health, and other domains. However, these efforts have generally been undertaken in parallel, with little thought about how different sustainability approaches may impact long-term interoperability from scientific, legal, or economic perspectives, or vice versa, i.e., how improved interoperability could enhance—or threaten—infrastructure sustainability. Scientific progress depends substantially on the ability to learn from the legacy of previous work available for current and future scientists to study, often by integrating disparate data not previously assembled. Digital data are less likely than scientific publications to be usable in the future unless they are managed by science-oriented repositories that can support long-term data access with the documentation and services needed for future interoperability. We summarize recent discussions in the social and natural science communities on emerging approaches to sustainability and relevant interoperability activities, including efforts by the Belmont Forum E-Infrastructures project to address global change data infrastructure needs; the Group on Earth Observations to further implement data sharing and improve data management across diverse societal benefit areas; and the Research Data Alliance to develop legal interoperability principles and guidelines and to address challenges faced by domain repositories. We also examine emerging needs for data interoperability in the context of the post-2015 development agenda and the expected set of Sustainable Development Goals (SDGs), which set ambitious targets for sustainable development, poverty reduction, and
Küppers, Bernd-Olaf; Artmann, Stefan
Complex systems in nature and society make use of information for the development of their internal organization and the control of their functional mechanisms. Alongside technical aspects of storing, transmitting and processing information, the various semantic aspects of information, such as meaning, sense, reference and function, play a decisive part in the analysis of such systems. With the aim of fostering a better understanding of semantic systems from an evolutionary and multidisciplinary perspective, this volume collects contributions by philosophers and natural scientists, linguists, i
The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.
Lenau, Torben Anker; Boelskifte, Per
a distinct character. For the technical properties there exists a well developed and commonly accepted terminology that can be utilised in product search and material selection (Ashby 1996). This is not the case for the semantic properties which are important for the outcome reflecting the product design processes. This working paper argues for the need for a commonly accepted terminology used to communicate semantic product properties. Designers and others involved in design processes are dependent on a sharp and clear verbal communication. Search facilities in computer programs for product and material...
Mørk, Simon; Godskesen, Jens Christian; Hansen, Michael Reichhardt
An alternative formal semantics for describing the temporal aspects for the ITU-T specification language SDL is proposed, based on the interval temporal logic Duration Calculus (DC). It is shown how DC can be used to give an SDL semantics with a precise treatment of temporal phenomena. The semantics...
This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
D. Borsboom; I. Visser
We argue that neural networks for semantic cognition, as proposed by Rogers & McClelland (R&M), do not acquire semantics and therefore cannot be the basis for a theory of semantic cognition. The reason is that the neural networks simply perform statistical categorization procedures, and these do not
Hanisch, Robert J.
The ISAIA project was originally proposed in 1999 as a successor to the informal AstroBrowse project. AstroBrowse, which provided a data location service for astronomical archives and catalogs, was a first step toward data system integration and interoperability. The goals of ISAIA were ambitious: '...To develop an interdisciplinary data location and integration service for space science. Building upon existing data services and communications protocols, this service will allow users to transparently query hundreds or thousands of WWW-based resources (catalogs, data, computational resources, bibliographic references, etc.) from a single interface. The service will collect responses from various resources and integrate them in a seamless fashion for display and manipulation by the user.' Funding was approved only for a one-year pilot study, a decision that in retrospect was wise given the rapid changes in information technology in the past few years and the emergence of the Virtual Observatory initiatives in the US and worldwide. Indeed, the ISAIA pilot study was influential in shaping the science goals, system design, metadata standards, and technology choices for the virtual observatory. The ISAIA pilot project also helped to cement working relationships among the NASA data centers, US ground-based observatories, and international data centers. The ISAIA project was formed as a collaborative effort between thirteen institutions that provided data to astronomers, space physicists, and planetary scientists. Among the fruits we ultimately hoped would come from this project was a central site on the Web that any space scientist could use to efficiently locate existing data relevant to a particular scientific question. Furthermore, we hoped that the needed technology would be general enough that smaller, more-focused communities within space science could use the same technologies and standards to provide more specialized services. A major challenge to searching
Tomas, Robert; Lutz, Michael
The well-known heterogeneity and fragmentation of data models, formats and controlled vocabularies of environmental data limit potential data users from utilising the wealth of environmental information available today across Europe. The main aim of INSPIRE is to improve this situation and give users the possibility to access, use and correctly interpret environmental data. Over the past years a number of INSPIRE technical guidelines (TG) and implementing rules (IR) for interoperability have been developed, involving hundreds of domain experts from across Europe. The data interoperability specifications, which have been developed for all 34 INSPIRE spatial data themes, are the central component of the TG and IR. Several of these themes are related to the earth sciences, e.g. geology (including hydrogeology, geophysics and geomorphology), mineral and energy resources, soil science, natural hazards, meteorology, oceanography, hydrology and land cover. The following main pillars for data interoperability and harmonisation have been identified during the development of the specifications: Conceptual data models describe the spatial objects and their properties and relationships for the different spatial data themes. To achieve cross-domain harmonization, the data models for all themes are based on a common modelling framework (the INSPIRE Generic Conceptual Model) and managed in a common UML repository. Harmonised vocabularies (or code lists) are to be used in data exchange in order to overcome interoperability issues caused by heterogeneous free-text and/or multi-lingual content. Since a mapping to a harmonized vocabulary could be difficult, the INSPIRE data models typically allow the provision of more specific terms from local vocabularies in addition to the harmonized terms - utilizing either the extensibility options or additional terminological attributes. Encoding. Currently, specific XML profiles of the Geography Markup Language (GML) are promoted as the standard
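The code-list extensibility mechanism, keeping a more specific local term alongside (or instead of) a harmonised code, can be sketched as follows; the vocabulary and mapping are invented for illustration and are not actual INSPIRE code lists:

```python
# Hypothetical harmonised code list and local-to-harmonised mapping.
HARMONISED = {"sand", "clay", "silt"}
LOCAL_TO_HARMONISED = {"Sable": "sand", "Argile": "clay"}

def encode_term(local_term):
    """Return (harmonised_code, local_term): the harmonised code when a
    mapping exists, otherwise None plus the unmapped local term, mirroring
    the idea of extensibility options / additional terminological
    attributes for terms that cannot (yet) be harmonised."""
    code = LOCAL_TO_HARMONISED.get(local_term)
    if code is not None and code in HARMONISED:
        return code, local_term
    return None, local_term
```

Keeping both values lets a consumer interoperate on the harmonised code while still seeing the more specific local term.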
Wu, Zhenyu; Xu, Yuan; Yang, Yunong; Zhang, Chunhong; Zhu, Xinning; Ji, Yang
Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for further composition, collaboration, and decision-making processes in CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), these approaches still cannot represent the complex relationships between devices when dynamic composition and collaboration occur, and they depend entirely on manual construction of a knowledge base with low scalability. In this paper, to address these limitations, we propose the semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution with both ontological engineering methods, by extending SSN, and machine learning methods based on an entity linking (EL) model. To test its feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia has a relatively high accuracy and that the time complexity is at a tolerable level. Advantages and disadvantages of SWoT4CPS, with future work, are also discussed. PMID: 28230725
Wu, Lei; Hoi, Steven C H; Yu, Nenghai
The Bag-of-Words (BoW) model is a promising image representation technique for image categorization and annotation tasks. One critical limitation of existing BoW models is that much semantic information is lost during the codebook generation process, an important step of BoW. This is because the codebook is often obtained simply by clustering visual features in Euclidean space. However, visual features related to the same semantics may not distribute in clusters in the Euclidean space, which is primarily due to the semantic gap between low-level features and high-level semantics. In this paper, we propose a novel scheme to learn optimized BoW models, which aims to map semantically related features to the same visual words. In particular, we consider the distance between semantically identical features as a measurement of the semantic gap, and attempt to learn an optimized codebook by minimizing this gap, aiming to achieve the minimal loss of the semantics. We refer to such kind of novel codebook as semantics-preserving codebook (SPC) and the corresponding model as the Semantics-Preserving Bag-of-Words (SPBoW) model. Extensive experiments on image annotation and object detection tasks with public testbeds from MIT's Labelme and PASCAL VOC challenge databases show that the proposed SPC learning scheme is effective for optimizing the codebook generation process, and the SPBoW model is able to greatly enhance the performance of the existing BoW model.
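The idea of mapping semantically related features to the same visual word can be illustrated with a toy label-aware codebook; this is a deliberate simplification (one centroid per semantic label), not the paper's actual optimization procedure:

```python
def centroid(points):
    """Mean of a list of equal-length numeric tuples."""
    d = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(d))

def assign(points, centroids):
    """Map each feature to its nearest codeword (squared Euclidean)."""
    def nearest(p):
        return min(range(len(centroids)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[k])))
    return [nearest(p) for p in points]

def semantic_gap(labels, words):
    """Toy gap measure: fraction of same-label feature pairs that were
    mapped to different codewords (0.0 = semantics fully preserved)."""
    pairs = [(i, j) for i in range(len(labels))
             for j in range(i + 1, len(labels)) if labels[i] == labels[j]]
    split = sum(1 for i, j in pairs if words[i] != words[j])
    return split / len(pairs) if pairs else 0.0

def label_aware_codebook(points, labels):
    """One centroid per semantic label, so semantically identical
    features map to the same visual word by construction."""
    by_label = {}
    for p, l in zip(points, labels):
        by_label.setdefault(l, []).append(p)
    return [centroid(ps) for ps in by_label.values()]
```

An unsupervised codebook clustered purely in feature space can split a semantic class across words; the label-aware variant drives the toy gap measure to zero on separable data.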
Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G
More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be annotated either manually or semi-automatically using plug-in information extraction tools. The annotated results are stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and was used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results using Semantator can be easily used in semantic-web-based reasoning tools for further inference.
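The annotate-then-query idea can be illustrated without any RDF library: annotations become subject-predicate-object triples, and a simple pattern match stands in for a SPARQL basic graph pattern. All URIs and predicate names below are invented; Semantator itself stores real RDF and answers real SPARQL queries.

```python
# Toy triple store illustrating annotate-then-query. The span IDs and
# predicates are hypothetical, not Semantator's actual vocabulary.
triples = set()

def annotate(subject, predicate, obj):
    """Record one annotation as a triple."""
    triples.add((subject, predicate, obj))

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

annotate("doc1#span12", "a", "Disease")
annotate("doc1#span12", "hasText", "type 2 diabetes")
annotate("doc1#span40", "a", "Drug")

# "SELECT ?s WHERE { ?s a Disease }" in miniature:
print([s for s, _, _ in match(p="a", o="Disease")])
```

Storing annotations as triples rather than offsets in a private format is what makes the downstream reasoning and consistency checking possible.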
Denhière, Guy; Bellissens, Cédrick; Jhean, Sandra
The goal of this paper is to present a model of children's semantic memory, which is based on a corpus reproducing the kinds of texts children are exposed to. After reviewing the literature on the development of semantic memory, a preliminary French corpus of 3.2 million words is described. Similarities in the resulting semantic space are compared to human data on four tests: association norms, a vocabulary test, semantic judgments, and memory tasks. A second corpus is described, which is composed of subcorpora corresponding to various ages. This stratified corpus is intended as a basis for developmental studies. Finally, two applications of these models of semantic memory are presented: the first aims at tracing the development of semantic similarities paragraph by paragraph; the second describes an implementation of a model of text comprehension derived from the Construction-Integration model (Kintsch, 1988, 1998) and based on such models of semantic memory.
This paper describes and exemplifies a semantic scoring system for students' on-line English-Chinese translations. To achieve accurate assessment, the system adopts a comprehensive method that combines semantic scoring with keyword-matching scoring. Four kinds of words are identified after parsing: verbs, adjectives, adverbs, and "the rest", including nouns, pronouns, idioms, prepositions, etc. The system treats words tagged with different parts of speech differently. It then calculates the semantic similarity between the words of the standard versions and those of the students' translations from the distinctive differences in the semantic features of these words, with the aid of HowNet. The first semantic feature of verbs and the last semantic features of adjectives and adverbs are calculated; "the rest" is scored by keyword matching. The experimental results show that the semantic scoring system is applicable to the task of scoring students' on-line English-Chinese translations.
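A toy version of this hybrid scheme can be sketched as follows: content words (verbs, adjectives, adverbs) are judged by a crude semantic-similarity lookup standing in for HowNet feature comparison, while the rest are scored by exact keyword matching. The similarity table, sentences, and tag positions are all invented for illustration.

```python
# Hybrid translation scoring sketch. SIM is a made-up stand-in for
# HowNet-based semantic-feature similarity.
SIM = {("buy", "purchase"): 0.9, ("quickly", "fast"): 0.8}

def sim(a, b):
    """Symmetric lookup of a toy word-similarity score."""
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def score_translation(student, reference, content_positions):
    """Average per-word score: semantic similarity at content-word
    positions (verbs/adjectives/adverbs), exact match elsewhere."""
    total = 0.0
    for i, (s, r) in enumerate(zip(student, reference)):
        if i in content_positions:
            total += sim(s, r)
        else:
            total += 1.0 if s == r else 0.0
    return total / len(reference)

ref = ["he", "purchase", "books", "fast"]
stu = ["he", "buy", "books", "quickly"]
print(score_translation(stu, ref, content_positions={1, 3}))  # → 0.925
```

The point of the split is that a near-synonym at a content-word position earns partial credit, while function words are cheap to check exactly.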
Bagnasco, S; Buncic, P; Carminati, F; Cerello, P G; Saiz, P
AliEn (ALICE Environment) is a GRID-like system for large-scale job submission and distributed data management developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. With the aim of exploiting upcoming Grid resources to run AliEn-managed jobs and store the produced data, the problem of AliEn-EDG interoperability was addressed and an interface was designed. One or more EDG (European Data Grid) User Interface machines run the AliEn software suite (Cluster Monitor, Storage Element and Computing Element) and act as interface nodes between the systems. An EDG Resource Broker is seen by the AliEn server as a single Computing Element, while the EDG storage is seen by AliEn as a single, large Storage Element; files produced in EDG sites are registered in both the EDG Replica Catalogue and the AliEn Data Catalogue, thus ensuring accessibility from both worlds. In fact, both registrations are required: the AliEn one is used for data management, the EDG one to guarantee the integrity and...
Bower, Ward Isaac [Ward Bower Innovations, LLC, Albuquerque, NM (United States); Ton, Dan T. [U.S. Dept. of Energy, Washington, DC (United States); Guttromson, Ross [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Glover, Steven F [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stamp, Jason Edwin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bhatnagar, Dhruv [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reilly, Jim [Reily Associates, Pittston, PA (United States)
This white paper focuses on "advanced microgrids," but sections do, out of necessity, reference today's commercially available systems and installations in order to clearly distinguish the differences and advances. Advanced microgrids have been identified as a necessary part of the modern electrical grid through two DOE microgrid workshops, the National Institute of Standards and Technology Smart Grid Interoperability Panel, and other related sources. With their grid-interconnectivity advantages, advanced microgrids will improve system energy efficiency and reliability and provide enabling technologies for grid-independence at end-user sites. One popular definition that has evolved and is used in multiple references is that a microgrid is a group of interconnected loads and distributed-energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect to and disconnect from the grid, enabling it to operate in either grid-connected or island mode. Further, an advanced microgrid can then be loosely defined as a dynamic microgrid.
Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean
For diverse scientific disciplines to interoperate they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO-level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.
Smirnova, O; Cameron, D; Ellert, M; Groenager, M; Johansson, D; Kleist, J [NDGF, Kastruplundsgade 22, DK-2770 Kastrup (Denmark); Dobe, P; Joenemo, J; Konya, B [Lund University, Experimental High Energy Physics, Institute of Physics, Box 118, SE-22100 Lund (Sweden); Fraagaat, T; Konstantinov, A; Nilsen, J K; Saada, F Ould; Qiang, W; Read, A [University of Oslo, Department of Physics, P. O. Box 1048, Blindern, N-0316 Oslo (Norway); Kocan, M [Pavol Jozef Safarik University, Faculty of Science, Jesenna 5, SK-04000 Kosice (Slovakia); Marton, I; Nagy, Zs [NIIF/HUNGARNET, Victor Hugo 18-22, H-1132 Budapest (Hungary); Moeller, S [University of Luebeck, Inst. Of Neuro- and Bioinformatics, Ratzeburger Allee 160, D-23538 Luebeck (Germany); Mohn, B, E-mail: email@example.com [Uppsala University, Department of Physics and Astronomy, Div. of Nuclear and Particle Physics, Box 535, SE-75121 Uppsala (Sweden)
The Advanced Resource Connector (ARC) middleware introduced by NorduGrid is one of the basic Grid solutions used by scientists worldwide. While being well-proven in daily use by a wide variety of scientific applications at large-scale infrastructures like the Nordic DataGrid Facility (NDGF) and smaller scale projects, production ARC of today is still largely based on conventional Grid technologies and custom interfaces introduced a decade ago. In order to guarantee sustainability, true cross-system portability and standards-compliance based interoperability, the ARC community undertakes a massive effort of implementing modular Web Service (WS) approach into the middleware. With support from the EU KnowARC project, new components were introduced and the existing key ARC services got extended with WS technology based standard-compliant interfaces following a service-oriented architecture. Such components include the hosting environment framework, the resource-coupled execution service, the re-engineered client library, the self-healing storage solution and the peer-to-peer information system, to name a few. Gradual introduction of these new services and client tools into the production middleware releases is carried out together with NDGF and thus ensures a smooth transition to the next generation Grid middleware. Standard interfaces and modularity of the new component design are essential for ARC contributions to the planned Universal Middleware Distribution of the European Grid Initiative.
occurring relations. AeroText, and consequently AeroDAML, can be tailored to particular domains through training sessions with annotated corpora... the complexities of semantic markup by using mnemonic names for URIs, hiding unnamed intermediate objects (represented by “GenSym” identifiers), and
E. Meij; M. Bron; L. Hollink; B. Huurnink; M. de Rijke
An important application of semantic web technology is recognizing human-defined concepts in text. Query transformation is a strategy often used in search engines to derive queries that are able to return more useful search results than the original query, and most popular search engines provide faci...
M. M. El-gayar
Full Text Available The amount of information grows by billions of database records every year, and there is an urgent need to search that information with a specialized tool called a search engine. Many search engines are available today, but their main challenge is that most of them cannot retrieve meaningful information intelligently. Semantic web technology is a solution that keeps data in a machine-readable format, helping machines smartly match this data with related information based on meaning. In this paper, we introduce a proposed semantic framework that includes four phases: crawling, indexing, ranking, and retrieval. This semantic framework operates over sorted RDF using an efficient proposed ranking algorithm and an enhanced crawling algorithm. The enhanced crawling algorithm crawls relevant forum content from the web with minimal overhead. The proposed ranking algorithm orders and evaluates similar meaningful data so that the retrieval process becomes faster, easier, and more accurate. We applied our work to a standard database and achieved 99 percent semantic-performance effectiveness in minimal time, with an error rate below 1 percent, compared with other semantic systems.
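The abstract only outlines its four phases, but the ranking idea it gestures at can be illustrated with a toy scorer: the query is expanded with related terms (a crude stand-in for meaning-based matching over RDF) before counting overlaps with each document. The relatedness table and documents are invented, not the paper's algorithm.

```python
# Toy "semantic" ranking: expand the query term with related words,
# then rank documents by expanded-term overlap. RELATED is a made-up
# stand-in for RDF-derived relatedness.
RELATED = {"car": {"car", "automobile", "vehicle"}}

def score(query_term, doc_words):
    """Count document words hit by the expanded query."""
    expansion = RELATED.get(query_term, {query_term})
    return sum(1 for w in doc_words if w in expansion)

docs = {
    "d1": "the automobile market grew".split(),
    "d2": "the apple fell from the tree".split(),
}
ranking = sorted(docs, key=lambda d: score("car", docs[d]), reverse=True)
print(ranking)  # → ['d1', 'd2']
```

A plain keyword engine would score both documents zero for "car"; the expansion is what lets "automobile" match, which is the kind of gain the abstract claims for meaning-based retrieval.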
Wei Hu; Yu-Zhong Qu; Xing-Zhi Sun
An object on the Semantic Web is likely to be denoted with several URIs by different parties. Object coreferencing is a process to identify "equivalent" URIs of objects for achieving a better Data Web. In this paper, we propose a bootstrapping approach for object coreferencing on the Semantic Web. For an object URI, we firstly establish a kernel that consists of semantically equivalent URIs from the same-as, (inverse) functional properties and (max-)cardinalities, and then extend the kernel with respect to the textual descriptions (e.g., labels and local names) of URIs. We also propose a trustworthiness-based method to rank the coreferent URIs in the kernel as well as a similarity-based method for ranking the URIs in the extension of the kernel. We implement the proposed approach, called ObjectCoref, on a large-scale dataset that contains 76 million URIs collected by the Falcons search engine until 2008. The evaluation on precision, relative recall and response time demonstrates the feasibility of our approach. Additionally, we apply the proposed approach to investigate the popularity of the URI alias phenomenon on the current Semantic Web.
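The simplest part of the kernel construction can be sketched directly: treating sameAs statements as symmetric, transitive equivalences and collecting every URI reachable from a starting URI. The triples below are invented; ObjectCoref's kernel additionally exploits (inverse) functional properties and cardinality constraints, which this sketch omits.

```python
# Kernel sketch: transitive-symmetric closure over sameAs pairs.
# The URI prefixes (ex:, dbp:, foaf:) are illustrative only.
from collections import defaultdict

def coreference_kernel(uri, same_as_pairs):
    """Set of URIs equivalent to `uri` under the symmetric, transitive
    closure of the given sameAs pairs (includes `uri` itself)."""
    adj = defaultdict(set)
    for a, b in same_as_pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {uri}, [uri]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

pairs = [("ex:Tim", "dbp:Tim_Berners-Lee"),
         ("dbp:Tim_Berners-Lee", "foaf:timbl")]
print(sorted(coreference_kernel("ex:Tim", pairs)))
```

The extension step then scores candidate URIs outside this closure by comparing labels and local names, which is where the similarity-based ranking comes in.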
Smith, P., II
Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates that federally funded projects make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences, such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers of EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to data access, discovery, and integration issues between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and the linking of data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice, contributing to improved understanding of the Earth's physical and biological systems. The USGS CDI SSF can be used as a reference model to map to Earth
Thomas, R.; Lowry, R. K.; Kokkinaki, A.
Having placed Linked Data tooling over a single SPARQL end point, the obvious future development for this system is to support semantic interoperability outside NVS through the incorporation of federated SPARQL end points in the USA and Australia during the ODIP II project. 1 https://vocab.nerc.ac.uk/sparql 2 https://www.bodc.ac.uk/data/codes_and_formats/vocabulary_search/
Full Text Available The Web Services paradigm promises to enable rich, flexible and dynamic interoperation of highly distributed, heterogeneous network-enabled services. The idea of Web Services Mining is that it takes findings from the field of data mining and applies them to the world of Web Services. The emerging concept of Semantic Web Services aims at more sophisticated Web Services technologies: on the basis of semantic description frameworks, intelligent mechanisms are envisioned for the discovery, composition, and contracting of Web Services. The aim of the semantic web is not only to support access to information on the web but also to support its usage. The Geospatial Semantic Web is an augmentation of the Semantic Web that adds geospatial abstractions, as well as related reasoning, representation and query mechanisms. Web Service security represents a key requirement for today's distributed, interconnected digital world and for the new generations, Web 2.0 and the Semantic Web. To date, the problem of security has been investigated largely in the context of standardization efforts; personal judgments are usually made based on the sensitivity of the information and the reputation of the party to which the information is to be disclosed. On the privacy front, this means that privacy invasion would net more quality and sensitive personal information. In this paper, we implemented a case study on the integrated privacy issues of Spatial Semantic Web Services Mining. We first improved the privacy of the Geospatial Semantic Layer. Finally, we implemented a Location Based System and improved its digital signature capability using advanced Digital Signature standards.
Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei
Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measurements in those applications, measuring semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a web search engine. We define various similarity scores for two given terms P and Q, using the page counts for querying P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated using support vector machines to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
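Page-count-based scores of the kind described here combine the counts c(P), c(Q), and c(P AND Q). As a hedged illustration (the abstract does not give its exact formulas), two widely used variants are a Jaccard-style overlap and a PMI-style score; the counts below are made up, and a real system would obtain them from a search-engine API.

```python
# Illustrative page-count similarity scores. The counts and the total
# page number n are hypothetical, not values from the paper.
import math

def web_jaccard(p: int, q: int, pq: int, cutoff: int = 5) -> float:
    """Jaccard-style score from page counts; 0 if the joint count is
    too small to be reliable."""
    if pq < cutoff:
        return 0.0
    return pq / (p + q - pq)

def web_pmi(p: int, q: int, pq: int, n: int = 10**10) -> float:
    """Pointwise-mutual-information-style score; n is an assumed total
    number of indexed pages."""
    if p == 0 or q == 0 or pq == 0:
        return 0.0
    return math.log2((pq / n) / ((p / n) * (q / n)))

counts = {"p": 120_000, "q": 95_000, "pq": 18_000}  # hypothetical counts
print(round(web_jaccard(**counts), 4))
```

Feeding several such scores into an SVM, as the abstract describes, lets the classifier learn which score is trustworthy in which regime instead of picking one formula a priori.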
Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules of different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to problems such as relationship extraction, ontology creation, and question answering [1--6]. Several techniques exist for calculating the semantic relatedness of two concepts, utilizing different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. In this work, we attempt to eliminate the need for manually combining semantic relatedness methods for new contexts or resources by proposing an automated method that seeks the best combination of semantic relatedness techniques and resources to achieve the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context given the available algorithms and resources.
The European GEO SIF has been initiated by the GIGAS project in an effort to better coordinate European requirements for GEO and GEOSS related activities, and is recognised by GEO as a regional SIF. To help advance the interoperability goals of the Global Earth Observing System of Systems (GEOSS), the Group on Earth Observations (GEO) Architecture and Data Committee (ADC) has established a Standards and Interoperability Forum (SIF) to support GEO organizations offering components and services to GEOSS. The SIF will help GEOSS contributors understand how to work with the GEOSS interoperability guidelines and how to enter their "interoperability arrangements" (standards or other ad hoc arrangements for interoperability) into the GEOSS registries. This will greatly facilitate the utility of GEOSS and encourage a significant increase in participation. To carry out its work most effectively, the SIF promotes the formation of Regional Teams, which will help organize and optimize the support coming from different parts of the world and reach out to regional and multi-disciplinary scientific communities. This will allow true global representation in supporting GEOSS interoperability. A SIF European Team is foreseen. The main role of the SIF is facilitating interoperability and working with members and participating organizations as they offer data and information services to the users of GEOSS. In this framework, the purpose of having a European Regional Team is to increase efficiency in carrying out the work of the SIF. Experts can join the SIF European Team by registering at the SIF European Team wiki site: http://www.thegigasforum.eu/sif/
Loescher, H.; Fundamental Instrument Unit
, GEO-BON, NutNet, etc.) and domestically (e.g., NSF-CZO, USDA-LTAR, DOE-NGEE, Soil Carbon Network, etc.), there is a strong and mutual desire to assure the interoperability of data. Interoperability is the degree to which each of the following is mapped between observatories (entities): i) science requirements linked with science questions, ii) traceability of measurements to nationally and internationally accepted standards, iii) how data products are derived, i.e., algorithms, procedures, and methods, and iv) the bioinformatics, which broadly includes data formats, metadata, controlled vocabularies, and semantics. Here, we explore the rationale and focus areas for interoperability, the governance and work structures, example projects (NSF-NEON, EU-ICOS, and AU-TERN), and the emergent roles of scientists in these endeavors.
Schildhauer, M.; Bermudez, L. E.; Bowers, S.; Dibner, P. C.; Gries, C.; Jones, M. B.; McGuinness, D. L.; Cao, H.; Cox, S. J.; Kelling, S.; Lagoze, C.; Lapp, H.; Madin, J.
Research in the environmental sciences often requires accessing diverse data, collected by numerous data providers over varying spatiotemporal scales, incorporating specialized measurements from a range of instruments. These measurements are typically documented using idiosyncratic, discipline-specific terms, and stored in management systems ranging from desktop spreadsheets to the Cloud, where the information is often further decomposed or stylized in unpredictable ways. This situation creates major informatics challenges for broadly discovering, interpreting, and merging the data necessary for integrative earth science research. A number of scientific disciplines have recognized these issues and have been developing semantically enhanced data storage frameworks, typically based on ontologies, to enable communities to better circumscribe and clarify the content of data objects within their domain of practice. There is concern, however, that cross-domain compatibility of these semantic solutions could become problematic. We describe here our efforts to address this issue by developing a core, unified Observational Data Model that should greatly facilitate interoperability among the semantic solutions growing organically within diverse scientific domains. Observational data models have emerged independently from several distinct scientific communities, including the biodiversity sciences, ecology, evolution, the geospatial sciences, and hydrology, to name a few. Informatics projects striving for data integration within each of these domains had converged on identifying "observations" and "measurements" as fundamental abstractions that provide useful "templates" through which scientific data can be linked, at the structural, composite, or even cell-value levels, to domain terms stored in ontologies or other forms of controlled vocabularies. The Scientific Observations Network, SONet (http://sonet.ecoinformatics.org) brings together a number of these observational
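The observation/measurement abstraction can be sketched as a small data model: an observation links an observed entity and its context to one or more measurements, each tying a characteristic (ideally a controlled-vocabulary term) to a value and unit. The field names and example terms below are illustrative guesses, not the actual SONet model.

```python
# Illustrative observational data model. Field names and the example
# vocabulary terms (e.g. "envo:air_temperature") are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    characteristic: str   # ontology/vocabulary term for what was measured
    value: float
    unit: str

@dataclass
class Observation:
    entity: str           # the thing observed
    context: dict         # e.g. location, time
    measurements: list = field(default_factory=list)

obs = Observation("lake:Mendota", {"time": "2010-07-01T12:00Z"})
obs.measurements.append(Measurement("envo:air_temperature", 24.5, "degC"))
print(len(obs.measurements), obs.measurements[0].unit)  # → 1 degC
```

Because every measurement carries its characteristic as a term rather than a free-text column header, data from different providers can be joined on shared vocabulary terms instead of guessed column names.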
Amunts, K; Hawrylycz, M J; Van Essen, D C; Van Horn, J D; Harel, N; Poline, J-B; De Martino, F; Bjaalie, J G; Dehaene-Lambertz, G; Dehaene, S; Valdes-Sosa, P; Thirion, B; Zilles, K; Hill, S L; Abrams, M B; Tass, P A; Vanduffel, W; Evans, A C; Eickhoff, S B
The last two decades have seen an unprecedented development of human brain mapping approaches at various spatial and temporal scales. Together, these have provided a large fund of information on many different aspects of the human brain, including micro- and macrostructural segregation, regional specialization of function, connectivity, and temporal dynamics. Atlases are central to integrating such diverse information in a topographically meaningful way. It is noteworthy that the brain mapping field has developed along several major lines such as structure vs. function, postmortem vs. in vivo, individual features of the brain vs. population-based aspects, or slow vs. fast dynamics. In order to understand human brain organization, however, it seems inevitable that these different lines are integrated and combined into a multimodal human brain model. To this end, we held a workshop to determine the constraints of a multi-modal human brain model that are needed to enable (i) an integration of different spatial and temporal scales and data modalities into a common reference system, and (ii) efficient data exchange and analysis. As detailed in this report, arriving at fully interoperable atlases of the human brain will still require much work at the frontiers of data acquisition, analysis, and representation. Among them, the latter may provide the most challenging task, in particular when it comes to representing features of vastly different scales of space, time and abstraction. The potential benefits of such an endeavor, however, clearly outweigh the problems, as only such a multi-modal human brain atlas may provide a starting point from which the complex relationships between structure, function, and connectivity may be explored.
Bermudez, L. E.
Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time to develop new software is reduced. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available at SourceForge, that can be run via the command line, deployed in a web servlet container, or integrated in a developer's environment via Maven. The TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses, conformance of GML-encoded data, proper values for elements and attributes in the XML, and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to testing via the OGC Testing web site and
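The kinds of assertions such compliance tests make (HTTP status, content type, well-formed XML) can be shown in miniature. This is a rough stand-in, not TEAM Engine's CTL: the response values are canned here, whereas a real run would issue an HTTP request to the service under test and would also validate against schemas and Schematron rules.

```python
# Minimal sketch of compliance-style response checks. The response
# below is canned/invented; real tests hit a live service endpoint.
import xml.etree.ElementTree as ET

def check_response(status, content_type, body):
    """Return a list of failure messages; an empty list means all
    checks passed."""
    failures = []
    if status != 200:
        failures.append("expected HTTP 200")
    if "xml" not in content_type:
        failures.append("expected an XML content type")
    try:
        ET.fromstring(body)
    except ET.ParseError:
        failures.append("body is not well-formed XML")
    return failures

resp_body = "<wfs:FeatureCollection xmlns:wfs='http://www.opengis.net/wfs'/>"
print(check_response(200, "text/xml", resp_body))  # → [] (all checks pass)
```

Scaling this idea to hundreds of assertions per service version is exactly what a test harness like TEAM Engine automates.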
Liu, Zhan; Le Calvé, Anne; Cretton, Fabian; Evéquoz, Florian; Mugellini, Elena
Nowadays, the increasing complexity of government-related processes presents considerable challenges to achieving satisfactory services. In this study we discuss the first results obtained in an ongoing project named “e-Government Innovation Center”. We first identify typical semantic problems in e-Government, then introduce an e-Government semantic business process management framework and illustrate the building permit application as an example of how semantic web technologies...
Full Text Available In 2020, more than 50 billion devices will be connected over the Internet. Every device will be connected to anything, anyone, anytime and anywhere in the world of the Internet of Things (IoT). This network will generate tremendous amounts of unstructured or semi-structured data that should be shared between different devices/machines for advanced and automated service delivery benefiting users' daily lives. Thus, mechanisms for data interoperability and automatic service discovery and delivery should be offered. Although many approaches have been suggested in the state of the art, none of them provides a fully interoperable, light, flexible and modular Sensing/Actuating-as-a-Service architecture. Therefore, this paper introduces a new semantic multi-agent architecture named OntoSmart for IoT data and service management through the service-oriented paradigm. It proposes a flexible, context-aware and distributed architecture for IoT systems, independent of particular sensors/actuators and scenarios, in particular for smart home systems.
The challenges associated with developing accurate models for cyber-physical systems are attributable to the intrinsic concurrent and heterogeneous computations of these systems. Even though reasoning based on interconnected domain-specific ontologies shows promise in enhancing modularity and joint functionality modelling, it has become necessary to build interoperable cyber-physical systems due to the growing pervasiveness of these systems. In this paper, we propose a semantically oriented distributed reasoning architecture for cyber-physical systems. This model accomplishes reasoning through a combination of heterogeneous models of computation. Using the flexibility of semantic agents as a formal representation for heterogeneous computational platforms, we define an autonomous and intelligent agent-based reasoning procedure for distributed cyber-physical systems. Sensor networks underpin the semantic capabilities of this architecture, and semantic reasoning based on Markov logic networks is adopted to address uncertainty in modelling. To illustrate the feasibility of this approach, we present a Markov logic based semantic event model for cyber-physical systems and discuss a case study of event handling and processing in a smart home.
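As a rough illustration of how Markov logic handles uncertainty in such an event model, the following Python sketch scores possible worlds by the summed weights of their satisfied rules and answers a query by brute-force enumeration. The atoms, rules and weights are invented for a smart-home example and are not taken from the paper.

```python
import math
from itertools import product

# Toy Markov logic network for smart-home event handling (names hypothetical).
# A "world" assigns truth values to the ground atoms of interest.
ATOMS = ["motion", "door_open", "occupied"]

RULES = [
    (1.5, lambda w: (not w["motion"]) or w["occupied"]),     # motion => occupied
    (1.0, lambda w: (not w["door_open"]) or w["occupied"]),  # door_open => occupied
    (0.5, lambda w: not w["occupied"]),                      # weak prior against occupancy
]

def world_weight(world):
    """Unnormalized weight: exp of the summed weights of satisfied formulas."""
    return math.exp(sum(wt for wt, f in RULES if f(world)))

def prob(query_atom, evidence):
    """P(query | evidence) by enumerating all worlds consistent with evidence."""
    num = den = 0.0
    for values in product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue
        w = world_weight(world)
        den += w
        if world[query_atom]:
            num += w
    return num / den

print(round(prob("occupied", {"motion": True}), 3))
```

Real MLN engines avoid this exponential enumeration with sampling or lifted inference; the sketch only shows the probabilistic semantics of weighted formulas.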
Romero-Hernández, David; 10.4204/EPTCS.62.4
We continue with the task of obtaining a unifying view of process semantics by considering, in this case, the logical characterization of the semantics. We start from the classic linear time-branching time spectrum developed by R.J. van Glabbeek. He provided a logical characterization of most of the semantics in his spectrum, but without following a single pattern. In this paper, we present a uniform logical characterization of all the semantics in the enlarged spectrum. The common structure of the formulas that constitute all the corresponding logics gives us a much clearer picture of the spectrum, clarifying the relations between the different semantics, and allows us to develop generic proofs of some general properties of the semantics.
Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki
In 2001, we developed an EHR system for regional healthcare information exchange and for providing individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML-equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to keep up with new requirements, maintaining semantic interoperability through archetype technology. It is also more flexible than the legacy EHR system.
Walter Priesnitz Filho
Achieving interoperability, i.e. creating identity federations between different electronic identity (eID) systems, has gained relevance throughout the past years. A serious problem of identity federations is the missing harmonization between various attribute providers (APs). In closed eID systems, ontologies allow a higher degree of automation in the process of aligning and aggregating attributes from different APs. This approach does not work for identity federations, as each eID system uses its own ontology to represent its attributes. Furthermore, providing attributes to the intermediate entities required to align and aggregate attributes potentially violates privacy rules. To tackle these problems, we propose the combined use of ontology-alignment (OA) approaches and locality-sensitive hashing (LSH) functions. We assess existing implementations of these concepts, defining and using criteria specific to identity federations. The obtained results confirm that proper implementations of these concepts exist and that they can be used to achieve interoperability between eID systems at the attribute level. A prototype is implemented showing that combining the two assessment winners (AlignAPI for ontology alignment and Nilsimsa for LSH functions) achieves interoperability between eID systems. In addition, the improvement obtained in the alignment process by combining the two assessment winners does not negatively impact the privacy of the user's data, since no clear-text data is exchanged in the alignment process.
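To illustrate the privacy-preserving matching idea, the sketch below compares attribute names through locality-sensitive fingerprints rather than clear text. It uses a simhash-style hash over character trigrams as a stand-in for Nilsimsa (the implementation the paper's assessment selected), and the attribute names are hypothetical.

```python
import hashlib

def simhash(text, bits=64):
    """Simhash-style locality-sensitive fingerprint over character trigrams.
    (A stand-in for Nilsimsa, used here only to show the principle.)"""
    vector = [0] * bits
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        h = int.from_bytes(
            hashlib.md5(padded[i:i + 3].encode()).digest()[:8], "big")
        for b in range(bits):
            vector[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if vector[b] > 0)

def similarity(a, b, bits=64):
    """Fraction of matching fingerprint bits; 1.0 means identical fingerprints."""
    matching = bits - bin(simhash(a) ^ simhash(b)).count("1")
    return matching / bits

# Similar attribute names hash close together, so two APs can compare
# fingerprints without exchanging any clear-text attribute values.
print(round(similarity("dateOfBirth", "date_of_birth"), 2),
      round(similarity("dateOfBirth", "postalCode"), 2))
```

In the actual scheme, the fingerprints feed into the ontology-alignment step, which decides whether two attributes from different eID systems denote the same concept.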
We re-examine the challenges concerning causality in the semantics of Esterel and show that they pertain to the known issues in Structural Operational Semantics (SOS) with negative premises. We show that the solutions offered for the semantics of SOS also provide answers to the semantic challenges of Esterel and that they satisfy the intuitive requirements set by the language designers.
Ejstrup, Michael; le Fevre Jakobsen, Bjarne
Semantic gaps are dangerous. Language adapts to the environment where it serves as a tool for communication. Language is a social agreement, and we all have to stick to both grammaticalized and non-grammaticalized rules in order to pass information about the world around us. As such language develops... impolite language and tends to create dangerous relations, where language in particular creates problems and trouble that could be avoided if we had better language tools at hand. But we do not have these tools of communication, and we are in a situation today where media, and especially digital and social media..., supported by new possibilities of migration, create dangerous situations. How can we avoid these accidental gaps in language, and especially the gaps in semantic and metaphoric tools? Do we have to keep silent and stop discussing certain issues, or do we have other ways to get access to sufficient language tools...
Teixeira, G. M.; Aguiar, M. S. F.; Carvalho, C. F.; Dantas, D. R.; Cunha, M. V.; Morais, J. H. M.; Pereira, H. B. B.; Miranda, J. G. V.
Verbal language is a dynamic mental process. Ideas emerge by means of the selection of words from subjective and individual characteristics throughout the oral discourse. The goal of this work is to characterize the complex network of word associations that emerges from an oral discourse on a discourse topic. To this end, the concepts of associative incidence and fidelity were elaborated; they represent the probability of occurrence of pairs of words in the same sentence over the whole oral discourse. Semantic networks of word associations were constructed, where words are represented as nodes and edges are created when the incidence-fidelity index between a pair of words exceeds a numerical limit (0.001). Twelve oral discourses were studied. The networks generated from these oral discourses present behavior typical of complex networks; their indices were calculated and their topologies characterized. The indices of these networks, obtained for each incidence-fidelity limit, exhibit a critical value at which the semantic network has maximum conceptual information and minimum residual associations. Semantic networks generated at this incidence-fidelity limit depict a pattern of hierarchical classes that represent the different contexts used in the oral discourse.
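The construction can be sketched as follows. The index here is a deliberately simplified stand-in for incidence-fidelity (sentence-level pair frequency over all pairs), and the sentences are toy data, not the studied discourses.

```python
from collections import Counter
from itertools import combinations

# Toy oral-discourse transcript: one sentence per string (illustrative data).
SENTENCES = [
    "the city grows around the river",
    "the river feeds the valley",
    "people settle near the river and the valley",
]

def word_pair_index(sentences):
    """Probability of each word pair co-occurring in a sentence, over the
    whole discourse (a simplified incidence-fidelity index)."""
    pair_counts = Counter()
    for s in sentences:
        words = sorted(set(s.split()))
        pair_counts.update(combinations(words, 2))
    total = sum(pair_counts.values())
    return {pair: c / total for pair, c in pair_counts.items()}

def build_network(sentences, limit=0.001):
    """Edges are word pairs whose index exceeds the chosen limit."""
    index = word_pair_index(sentences)
    return {pair for pair, p in index.items() if p > limit}

edges = build_network(SENTENCES, limit=0.03)
print(("river", "valley") in edges)  # True: the pair recurs across sentences
```

Sweeping the limit and measuring network indices at each value is how the critical limit described above would be located.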
Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura
In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image's main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; subsequently the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of the Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Subsequently, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists in relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path. The construction of several Attention Trees has been performed and partial
European Virtual Observatory (VO) activities have been coordinated by a series of projects funded by the European Commission. Three pillars were identified: support to data providers for the implementation of their data in the VO framework; support to the astronomical community in their usage of VO-enabled data and tools; and technological work for updating the VO framework of interoperability standards and tools. A new phase is beginning with the ASTERICS cluster project. The ASTERICS Work Package "Data Access, Discovery and Interoperability" aims at making the data from the ESFRI projects and their pathfinders available for discovery and usage, interoperable in the VO framework and accessible with VO-enabled common tools. VO teams and representatives of ESFRI and pathfinder projects and of EGO/VIRGO are engaged together in the Work Package. ESO is associated with the project, which is also working closely with ESA. The three pillars identified for coordinating European VO activities are tackled.
Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas
EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures by providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of geospatial heterogeneous distributed data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and
Heene, M.; Buesselberg, T.; Schroeder, D.; Brotzer, A.; Nativi, S.
The following poster highlights operational interoperability challenges using the example of the Global Earth Observation System of Systems (GEOSS) and the World Meteorological Organization Information System (WIS). At the heart of both systems is a catalogue of earth observation data, products and services, but with different metadata management concepts. While in WIS a strong governance with its own metadata profile exists for the hundreds of thousands of metadata records, GEOSS adopted a more open approach for its ten million records. Furthermore, the development of WIS - as an operational system - follows a roadmap with committed backwards compatibility, while the GEOSS development process is more agile. The poster discusses how interoperability can be reached across the different metadata management concepts and how a proxy concept helps to couple two systems that follow different development methodologies. Furthermore, the poster highlights the importance of monitoring and backup concepts as a verification method for operational interoperability.
Li Xitong; Fan Yushun; Huang Shuangxi
With the prevalence of service-oriented architecture (SOA), web services have become the dominant technology for constructing workflow systems. As a workflow is the composition of a series of interrelated web services that realize its activities, the interoperability of workflows can be treated as the composition of web services. To address this, a framework for interoperability of business process execution language (BPEL)-based workflows is presented, which performs three phases: transformation, conformance testing and execution. The core components of the framework are proposed, with emphasis on how these components promote interoperability. In particular, dynamic binding and re-composition of workflows in terms of web service testing are presented. Finally, an example of business-to-business (B2B) collaboration is provided to illustrate how to perform composition and conformance testing.
Yang, Chao; Chen, Nengcheng; Di, Liping
Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure, separately, a workflow that accesses sensor information and one that processes it. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTful workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous geoprocessing workflows.
Martinez, I; Del Valle, P; Munoz, P; Trigo, J D; Escayola, J; Martínez-Espronceda, M; Muñoz, A; Serrano, L; Garcia, J
The new paradigm of e-Health demands open sensors and middleware components that permit transparent integration and end-to-end interoperability of new personal health devices. The use of standards seems to be the internationally adopted way to solve these problems. This paper presents the implementation of an end-to-end standards-based e-Health solution. This includes ISO/IEEE11073 standard for the interoperability of the medical devices in the patient environment and EN13606 standard for the interoperable exchange of the Electronic Healthcare Record. The design strictly fulfills all the technical features of the most recent versions of both standards. The implemented prototype has been tested in a laboratory environment to demonstrate its feasibility for its further transfer to the healthcare system.
The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability to answering semantic questions, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.
The Semantic Grid is an extension of the current Grid in which information and services are given well defined and explicitly represented meaning, better enabling computers and people to work in cooperation. In the last few years, several projects have embraced this vision and there are already successful pioneering applications that combine the strengths of the Grid and of semantic technologies. However, the Semantic Grid currently lacks a reference architecture, or a systematic approach for...
Spyrou, Evaggelos; Mylonas, Phivos
Broad in scope, Semantic Multimedia Analysis and Processing provides a complete reference of techniques, algorithms, and solutions for the design and the implementation of contemporary multimedia systems. Offering a balanced, global look at the latest advances in semantic indexing, retrieval, analysis, and processing of multimedia, the book features the contributions of renowned researchers from around the world. Its contents are based on four fundamental thematic pillars: 1) information and content retrieval, 2) semantic knowledge exploitation paradigms, 3) multimedia personalization, and 4)
Malo, Pekka; Ahlgren, Oskar; Wallenius, Jyrki; Korhonen, Pekka
The use of domain knowledge is generally found to improve query efficiency in content filtering applications. In particular, tangible benefits have been achieved when using knowledge-based approaches within more specialized fields, such as medical free texts or legal documents. However, the problem is that sources of domain knowledge are time-consuming to build and equally costly to maintain. As a potential remedy, recent studies on Wikipedia suggest that this large body of socially constructed knowledge can be effectively harnessed to provide not only facts but also accurate information about semantic concept-similarities. This paper describes a framework for document filtering, where Wikipedia's concept-relatedness information is combined with a domain ontology to produce semantic content classifiers. The approach is evaluated using Reuters RCV1 corpus and TREC-11 filtering task definitions. In a comparative study, the approach shows robust performance and appears to outperform content classifiers based on ...
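A toy version of the filtering idea might look like the following; the concept-relatedness scores are invented stand-ins for the values the framework derives from Wikipedia's link structure, and the scoring rule is a simplification of a real classifier.

```python
# Hypothetical concept-relatedness scores (stand-ins for values derived
# from Wikipedia in the described framework).
RELATEDNESS = {
    ("merger", "acquisition"): 0.9,
    ("merger", "takeover"): 0.8,
    ("merger", "football"): 0.05,
}

def related(a, b):
    """Symmetric lookup with identity shortcut; unknown pairs score 0."""
    if a == b:
        return 1.0
    return RELATEDNESS.get((a, b)) or RELATEDNESS.get((b, a)) or 0.0

def topic_score(doc_concepts, topic_concepts):
    """Score a document against a filtering topic: average best
    relatedness of each topic concept to any concept in the document."""
    scores = [max(related(t, c) for c in doc_concepts)
              for t in topic_concepts]
    return sum(scores) / len(scores)

topic = ["merger", "acquisition"]
print(topic_score(["merger", "takeover"], topic))   # high: on-topic document
print(topic_score(["football"], topic))             # low: off-topic document
```

A document is accepted when its score clears a threshold tuned on training data; the domain ontology supplies the topic concepts.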
Pierantoni, Gabriele; Carley, Eoin P.
Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is best suited for workflows. This approach has been investigated in the HELIO project. The HELIO project developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows in three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows usually developed in TAVERNA. They
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Smart Grid Interoperability Standards; Supplemental Notice of Technical... Technical Conference on Smart Grid Interoperability Standards will be held on Monday, January 31,...
... Energy Regulatory Commission Smart Grid Interoperability Standards; Notice of Technical Conference... regulatory authorities that also are considering the adoption of Smart Grid Interoperability Standards.../FERC Collaborative on Smart Response (Collaborative), in the International D Ballroom at the Omni...
Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)]
Three major high-performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard to develop and time-consuming to implement; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component-Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
Elena N. Tsay
In the article, concept, one of the principal notions of cognitive linguistics, is investigated. Considering concept as a cultural phenomenon having language realization and ethnocultural peculiarities, a description of the concept “happiness” is presented. The lexical and semantic paradigm of the concept of happiness correlates with a great number of lexical and semantic variants. The work reveals semantic representatives of the concept of happiness covering supreme spiritual values, and gives a semantic interpretation of their functioning in Biblical discourse.
System semantics of explanatory dictionaries. Some semantic properties of language that follow from the structure of the lexicographical systems of big explanatory dictionaries are considered. Hyperchains and hypercycles are defined as a certain kind of automorphism of the lexicographical system of an explanatory dictionary. Some semantic consequences following from the principles of lexicographic closure and lexicographic completeness are investigated using the hyperchain and hypercycle formalism. The connection between the hypercycle properties of lexicographical system semantics and Gödel's incompleteness theorem is discussed.
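Hypercycles of this kind can be found mechanically: a definitional cycle is a path from a headword back to itself through the words used in definitions. The sketch below detects such cycles in a toy dictionary whose entries are invented for illustration.

```python
# Toy lexicographical system: each headword maps to the words used in its
# definition (hypothetical entries, for illustration only).
DICTIONARY = {
    "big": ["large"],
    "large": ["big"],          # big -> large -> big: a definitional cycle
    "red": ["colour"],
    "colour": ["property"],
    "property": ["attribute"],
}

def find_cycles(dictionary):
    """Depth-first search for definitional cycles (hypercycle candidates)."""
    cycles = []
    def visit(word, path):
        for used in dictionary.get(word, []):
            if used in path:
                cycles.append(path[path.index(used):] + [used])
            else:
                visit(used, path + [used])
    for head in dictionary:
        visit(head, [head])
    return cycles

# Each 2-cycle is reported once per entry point it is reached from.
print(find_cycles(DICTIONARY))
```

Lexicographic closure then asks that every word used in a definition be a headword itself; here "attribute" violates it, which a one-line check over the same structure would reveal.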
Pederson, Rune; Ellingsen, Gunnar
The use of openEHR archetypes increases the interoperability of clinical terminology, and in doing so improves the availability of clinical terminology for both primary and secondary purposes. Where clinical terminology is employed in the EPR system, research reports conflicting results on the use of structuring and standardization as measures of success. To elucidate this, the paper focuses on the effort to establish a national repository of openEHR-based archetypes in Norway, where clinical terminology could be included with a threefold benefit for interoperability.
Junaid Rashid; Muhammad Wasif Nisar
Semantic search engines(SSE) are more efficient than other web engines because in this era of busy life everyone wants an exact answer to his question which only semantic engines can provide. The immense increase in the volume of data, traditional search engines has increased the number of answers to satisfy the user. This creates the problem to search for the desired answer. To solve this problem, the trend of developing semantic search engines is increasing day by da...
Improved interoperability between public and private organizations is of key significance to make digital government successful. Digital government interoperability, information sharing protocols and security are considered the key issues for achieving a refined stage of digital government. Flawless interoperability is essential to share information between diverse and widely dispersed organisations in several network environments by using computer-based tools. Digital government must ensure security for its information systems, including computers and networks, to provide better service to citizens. Governments around the world are increasingly turning to information sharing and integration to solve problems in programs and policy areas. Problems of global concern such as disease detection and control, terrorism, immigration and border control, illegal drug trafficking, and more demand information sharing, harmonization and cooperation among government agencies within a country and across national borders. A number of daunting challenges stand in the way of developing an efficient information sharing protocol. A secure and trusted information-sharing protocol is required to enable users to interact and share information easily and safely across many diverse networks and databases globally. This article presents (1) a literature review of digital government security and interoperability and (2) a key research issue: a trust-based information sharing protocol for seamless interoperability among diverse government organizations and agencies around the world. While trust-based information access is well studied in the literature, existing secure information sharing technologies and protocols cannot offer enough incentives for government agencies to share information among themselves without harming their own national interests. To overcome the drawbacks of existing technology, an innovative and proficient trust-based security protocol is proposed in this
A. Anil Sinaci
Postmarketing drug surveillance is a crucial aspect of the clinical research activities in pharmacovigilance and pharmacoepidemiology. Successful utilization of available Electronic Health Record (EHR) data can complement and strengthen postmarketing safety studies. In terms of the secondary use of EHRs, access and analysis of patient data across different domains are a critical factor; we address this data interoperability problem between EHR systems and clinical research systems in this paper. We demonstrate that this problem can be solved at an upper level with the use of common data elements in a standardized fashion, so that clinical researchers can work with different EHR systems independently of the underlying information model. The Postmarketing Safety Study Tool lets clinical researchers extract data from different EHR systems by designing data collection set schemas through common data elements. The tool interacts with a semantic metadata registry through the IHE data element exchange profile. The Postmarketing Safety Study Tool and its supporting components have been implemented and deployed on the central data warehouse of the Lombardy region, Italy, which contains anonymized records of about 16 million patients with over 10-year longitudinal data on average. Clinical researchers at Roche validated the tool with real-life use cases.
Semantic search engines (SSE) are more efficient than other web engines because in this era of busy life everyone wants an exact answer to his question, which only semantic engines can provide. With the immense increase in the volume of data, traditional search engines return an increasing number of answers to satisfy the user. This creates the problem of searching for the desired answer. To solve this problem, the trend of developing semantic search engines is increasing day by day. Semantic search engines work to extract the best answer to a user query, the one which exactly fits it. Traditional search engines are keyword based, which means that they do not know the meaning of the words we type in our queries. For this reason, semantic search engines surpass conventional search engines: they give us meaningful and well-defined information. In this paper, we discuss the background of semantic searching, semantic search engines and the technology used for them, and compare some of the existing semantic search engines on various factors.
CHU Wang; QIAN Depei
This paper combines semantic web technology with business modeling and yields a semantic business model that is semantically described in terms of roles and relationships. The semantic business model can be used to discover grid services by means of automation tools. The gap between business goals and grid services is bridged by role relationships and their compositions, so that virtual organization evolution is supported effectively. The semantic business model can support virtual organization validation at the design stage rather than at run-time. Designers can animate their business model and make an initial assessment of what interactions should occur between roles and in which order. Users can verify whether the grid service compositions satisfy the business goals.
The Unified Medical Language System (UMLS) is an extensive source of biomedical knowledge developed and maintained by the US National Library of Medicine (NLM) and currently used in a wide variety of biomedical applications. The Semantic Network, a component of the UMLS, is a structured description of core biomedical knowledge consisting of well-defined semantic types and relationships between them. We investigate the expressiveness of DAML+OIL, a markup language proposed for ontologies on the Semantic Web, for representing the knowledge contained in the Semantic Network. Requirements specific to the Semantic Network, such as polymorphic relationships and blocking of relationship inheritance, are discussed, and approaches to represent these in DAML+OIL are presented. Finally, conclusions are presented along with a discussion of ongoing and future work.
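The blocked-inheritance requirement can be made concrete with a small sketch: a relationship asserted on a parent type is inherited down the is-a hierarchy unless a child explicitly blocks it. The type and relationship names below are illustrative stand-ins, not actual Semantic Network content.

```python
# Toy is-a hierarchy of semantic types (names illustrative, not UMLS content).
IS_A = {
    "Enzyme": "Biologically Active Substance",
    "Biologically Active Substance": "Substance",
}

# Relationships asserted at a type; a None target blocks an inherited one.
RELATIONS = {
    ("Biologically Active Substance", "affects"): "Physiologic Function",
    ("Enzyme", "affects"): None,   # block: the child opts out of 'affects'
}

def lookup(semantic_type, relation):
    """Walk up the is-a hierarchy, honoring explicit blocks on the way."""
    t = semantic_type
    while t is not None:
        if (t, relation) in RELATIONS:
            return RELATIONS[(t, relation)]   # may be None when blocked
        t = IS_A.get(t)
    return None

print(lookup("Biologically Active Substance", "affects"))  # inherited target
print(lookup("Enzyme", "affects"))                         # None: blocked
```

Encoding exactly this "inherit unless blocked" behavior is what strains DAML+OIL, whose property restrictions are monotonic by default.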
One of the principal scientific challenges that drives my group is to understand the character of formal knowledge on the Web. By formal knowledge, I mean information that is represented on the Web in something other than natural language text—typically, as machine-readable Web data with a formal syntax and a specific, intended semantics. The Web provides a major counterpoint to our traditional artificial intelligence (AI) based accounts of formal knowledge. Most symbolic AI systems are designed to address sophisticated logical inference over coherent conceptual knowledge, and thus the underlying research is focused on characterizing formal properties such as entailment relations, time/space complexity of inference, monotonicity, and expressiveness. In contrast, the Semantic Web allows us to explore formal knowledge in a very different context, where data representations exist in a constantly changing, large-scale, highly distributed network of loosely connected publishers and consumers, and are governed by a Web-derived set of social practices for discovery, trust, reliability, and use. We are particularly interested in understanding how large-scale Semantic Web data behaves over longer time periods: the way by which its producers and consumers shift their requirements over time; how uniform resource identifiers (URIs) are used to dynamically link knowledge together; and the overall lifecycle of Web data from publication, to use, integration with other knowledge, evolution, and eventual deprecation. We believe that understanding formal knowledge in this Web context is the key to bringing existing AI insights and knowledge bases to the level of scale and utility of the current hypertext Web.
the information out of various types of EXIF digital camera files and show it in a reasonably consistent way (schema), 2003. http://www.w3.org/2000...many documents are not expressible in logic at all, and many in logic but not in N3. However, we are building a system for which a prime goal is the...demonstrate that conventional logic programming tools are efficient and straightforwardly adapted to Semantic Web work. • Jena RDF toolkit now accepts N3 as
Is meaningful communication possible between two intelligent parties who share no common language or background? In this work, a theoretical framework is proposed in which one can address when and to what extent such semantic communication is achievable: such problems can be rigorously addressed by explicitly focusing on the goals of the communication. Under this framework, it is possible to show that for many goals, communication without any common language or background is possible using universal protocols. This work should be accessible to anyone with undergraduate-level knowledge.
Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.
In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a
Investigates conceptual barriers prevalent in the works of both proponents and opponents of semantic naturalism. Searches for a tenable definition of naturalism according to which one can be a realist, a non-reductionist, and a naturalist about semantic content. (Author/VWL)
Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.
Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…
Van Valin, Robert D., Jr.
The nature of semantic roles and grammatical relations are explored from the perspective of Role and Reference Grammar (RRG). It is proposed that unraveling the relational aspects of grammar involves the recognition that semantic roles fall into two types, thematic relations and macroroles, and that grammatical relations are not universal and are…
Lykke, Marianne; Dalbin, Sylvie; Smedt, Johan De;
ISO 25964-2:2013 is applicable to thesauri and other types of vocabulary that are commonly used for information retrieval. It describes, compares and contrasts the elements and features of these vocabularies that are implicated when interoperability is needed. It gives recommendations for the est...
Khadka, Ravi; Sapkota, Brahmananda; Ferreira Pires, Luis; Sinderen, van Marten; Jansen, Slinger; Sinderen, van Marten; Johnson, Pontus
Service-Oriented Architecture (SOA) has emerged as an architectural style to foster enterprise interoperability, as it claims to facilitate the flexible composition of loosely coupled enterprise applications and thus alleviates the heterogeneity problem among enterprises. Meanwhile, Model-Driven Arc
such assumptions were a reasonable approximation that seldom led to serious problems. They also were of tremendous value in simplifying...traditional assumptions are inappropriate and traditional integration methods inadequate. Interoperation in systems of systems encompasses human...the Edge of the Organization: Role as Praxis." The International Society for the Psychoanalytic Study of Organizations 2005 Symposium. Baltimore
J. K. Zhang
This paper describes the use of a new distributed middleware technology, Web services, in the proposed Healthcare Information System (HIS) to address the issue of system interoperability raised by existing healthcare information systems. With the development of HISs, hospitals and healthcare institutes have been building their own HISs for processing massive amounts of healthcare data, such as the systems built for hospitals under the NHS (National Health Service) to manage patient records. Nowadays many healthcare providers are willing to integrate their systems' functions and data for information sharing. This has raised concerns about data transmission, data security, and network limitations. Among these issues, system and language interoperability is one of the most obvious, since data and application integration is not an easy task due to differences in the programming languages, system platforms, and Database Management Systems (DBMSs) used within different systems. As a new distributed middleware technology, Web services bring an ideal solution to the issue of system and language interoperability. Web services have proven very successful in many commercial applications (e.g., Amazon.com, Dell Computer), but healthcare information systems are a different setting. As a result, the Web Service-based Integrated Healthcare Information System (WSIHIS) is proposed not only to address the interoperability issue of existing HISs but also to introduce this new technology into the healthcare environment.
Università di Pisa, Dipartimento di Ingegneria dell'Informazione (Elettronica, Informatica, Telecomunicazioni), Via Girolamo Caruso 16, 56122 Pisa, Italy. Waveform Diversity and Design for Interoperating
Viewpoint," Wehrtechnik (Jul 1979): pp. 28-33. Esteve, Jean-Rene. French Armaments and Communications Interoperability Within NATO. U.S. Naval Postgraduate...
Ana M. Martínez Tamayo
Interoperability between knowledge organization systems (KOS) has become very important in recent years, in order to facilitate simultaneous searches in several databases or to merge different databases into one. The new standards for KOS design and development, the American Z39.19:2005 and the British BS 8723-4:2007, include detailed recommendations for interoperability. A new ISO standard, ISO 25964-1 on thesauri and interoperability, is also in preparation and will be added to those above. The available technology provides tools for this purpose, such as formats and functional requirements for subject authorities, as well as the Semantic Web tools RDF/OWL, SKOS Core, and XML. On the other hand, it is currently very hard to design and develop new KOS due to economic constraints, so interoperability makes it possible to take advantage of existing KOS. This paper reviews the basic concepts, the models and methods recommended by the standards, and numerous documented experiences of interoperability between KOS.
Oshaiba, Mohamed Marouf Z; El Houby, Enas M F; Salah, Akram
The volume of online literature is increasing at a tremendous rate, and biology is one of the fastest-growing domains. Biological researchers face the problem of finding what they are searching for effectively and efficiently. The aim of this research is to find documents that contain any combination of biological process and/or molecular function and/or cellular component terms. This research proposes a framework that helps researchers retrieve meaningful documents related to their asserted terms based on the Gene Ontology (GO). The system utilizes GO by semantically decomposing it into its three subontologies (cellular component, biological process, and molecular function). Researchers have the flexibility to choose search terms from any combination of the three subontologies. Document annotation is used to create an index of the biological terms in documents and thus speed up the search process. Query expansion is used to infer terms semantically related to the asserted terms; it increases the meaningfulness of search results by exploiting term synonyms and term relationships. The system uses a ranking method to order the retrieved documents by ranking weight. The proposed system meets researchers' need to find documents that fit the asserted terms semantically.
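The expansion-plus-ranking pipeline can be sketched as follows; this is a hypothetical miniature, not the authors' system (the synonym table, the term index, and the hit-count scoring are all invented stand-ins).

```python
# Toy ontology-driven retrieval: expand each query term to its synonym
# set, score documents by how many expanded terms they contain, and
# rank by score. All data below is invented for illustration.

SYNONYMS = {"apoptosis": {"apoptosis", "programmed cell death"}}
INDEX = {  # doc id -> GO terms recorded during annotation
    "d1": {"apoptosis", "mitochondrion"},
    "d2": {"programmed cell death"},
    "d3": {"cell wall"},
}

def expand(term):
    """Return all surface forms treated as equivalent to `term`."""
    return SYNONYMS.get(term, {term})

def search(terms):
    """Score documents by the number of (expanded) query terms they hit."""
    expanded = [expand(t) for t in terms]
    scores = {}
    for doc, doc_terms in INDEX.items():
        hits = sum(1 for forms in expanded if forms & doc_terms)
        if hits:
            scores[doc] = hits
    return sorted(scores, key=lambda d: (-scores[d], d))

print(search(["apoptosis", "mitochondrion"]))  # → ['d1', 'd2']
```

Note that d2 is retrieved at all only because of expansion: it never mentions "apoptosis" literally, which is the gain in "meaningful results" the abstract claims for synonym-based query expansion.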
Seyedhosseini, Mojtaba; Tasdizen, Tolga
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely recognized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
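A minimal sketch of the coarse-to-fine scheme described above, under stated simplifications: 2x2 average pooling stands in for downsampling, and a fixed threshold stands in for the trained classifiers (the real CHM learns these). The point is only the data flow: classify the coarsest level, upsample its output as context, and classify the next finer level.

```python
# Toy coarse-to-fine hierarchy: each level's classifier sees the image
# at that resolution plus the upsampled output of the coarser level.

def downsample(img):
    """Halve resolution by 2x2 average pooling (assumes even dims)."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def upsample(img):
    """Nearest-neighbor 2x upsampling."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def toy_classifier(img, context):
    """Stand-in for a trained classifier: threshold image + context."""
    return [[1.0 if a + b > 1.0 else 0.0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img, context)]

def chm_predict(img, levels=2):
    """Coarse-to-fine pass: classify each level, feeding the upsampled
    coarse output as context into the next finer level."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    context = [[0.0] * len(pyramid[-1][0]) for _ in pyramid[-1]]
    for level in reversed(pyramid):            # coarsest first
        if len(context) != len(level):
            context = upsample(context)
        context = toy_classifier(level, context)
    return context

img = [[2.0, 2.0, 0.0, 0.0],
       [2.0, 2.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
print(chm_predict(img))  # bright top-left block survives both levels
```

In the actual CHM, `toy_classifier` is a learned classifier per level and the final prediction combines the whole multi-resolution context stack, but the pyramid-plus-context wiring is the same.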
Santoro, Mattia; Papeschi, Fabrizio; Craglia, Massimo; Nativi, Stefano
Even with the use of common data-model standards to publish and share geospatial data, users may still face semantic inconsistencies when they use Spatial Data Infrastructures, especially in multidisciplinary contexts. Several semantic mediation solutions exist to address this issue; they span from simple XSLT documents that transform one data model schema into another, to more complex services based on the use of ontologies. This work presents the activity done in the context of the OGC Web Services Phase 9 (OWS-9) Cross Community Interoperability thread to develop a semantic mediation solution by enhancing the GEOSS Discovery and Access Broker (DAB). This is a middleware component that provides harmonized access to geospatial datasets according to each client application's preferred service interface (Nativi et al. 2012, Vaccari et al. 2012). Given a set of remote feature data encoded in different feature schemas, the objective of the activity was to use the DAB to enable client applications to transparently access the feature data according to one single schema. Due to the flexible architecture of the Access Broker, it was possible to introduce a new transformation type into the configured chain of transformations. In fact, the Access Broker already provided the following transformations: coordinate reference system (CRS), spatial resolution, spatial extent (e.g., a subset of a data set), and data encoding format. A new software module was developed to invoke the needed external semantic mediation service and harmonize the accessed features. In OWS-9 the Access Broker invokes a SPARQL WPS to retrieve mapping rules for the OWS-9 schemas (USGS and NGA). The solution implemented to address this problem shows the flexibility and extensibility of the brokering framework underpinning the GEO DAB: new services can be added to augment the number of supported schemas without the need to modify other components and/or software modules. Moreover, all other transformations (CRS
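The chain-of-transformations design, with semantic mediation added as one more pluggable link, can be sketched as follows. All function names, the feature record, and the mapping rule below are invented; in OWS-9 the mapping rules came from a SPARQL WPS rather than a plain dict.

```python
# Broker-style transformation chain: each step rewrites a feature
# record, and schema mediation is just another pluggable step.

def reproject(feature):
    """Stand-in for a CRS transformation step."""
    return dict(feature, crs="EPSG:4326")

def mediate_schema(feature, mapping):
    """Rename properties according to externally supplied mapping rules
    (a plain dict here; a mediation service in the real broker)."""
    props = {mapping.get(k, k): v for k, v in feature["props"].items()}
    return dict(feature, props=props, schema="target")

def access_broker(feature, chain):
    """Run the feature through the configured chain of transformations."""
    for step in chain:
        feature = step(feature)
    return feature

MAPPING = {"roadWidth": "width_m"}            # hypothetical rule
feat = {"crs": "EPSG:3857", "schema": "source",
        "props": {"roadWidth": 7.5}}
out = access_broker(feat, [reproject,
                           lambda f: mediate_schema(f, MAPPING)])
print(out["schema"], out["props"])
```

The extensibility claim in the abstract corresponds to the fact that supporting a new schema means appending one more `step` to the chain, with no change to the broker loop or the other steps.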
Holter, Gregory M.
The National Counterdrug Center (NCC) was initially authorized by Congress in FY 1999 appropriations to create a simulation-based counterdrug interoperability training capability. As the lead organization for Research and Analysis to support the NCC, the Pacific Northwest National Laboratory (PNNL) was responsible for developing the requirements for this interoperability simulation capability. These requirements were structured to address the hardware and software components of the system, as well as the deployment and use of the system. The original set of requirements was developed through a process of conducting a user-based survey of requirements for the simulation capability, coupled with an analysis of similar development efforts. The user-based approach ensured that existing concerns with respect to interoperability within the law enforcement community would be addressed. Law enforcement agencies within the designated pilot area of Cochise County, Arizona, were surveyed using interviews and ride-alongs during actual operations. The results of this survey were then accumulated, organized, and validated with the agencies to ensure the accuracy of the results. These requirements were then supplemented by adapting operational requirements from existing systems to ensure system reliability and operability. The NCC adopted a development approach providing incremental capability through the fielding of a phased series of progressively more capable versions of the system. This allowed for feedback from system users to be incorporated into subsequent revisions of the system requirements, and also allowed the addition of new elements as needed to adapt the system to broader geographic and geopolitical areas, including areas along the southwest and northwest U.S. borders. This paper addresses the processes used to develop and refine requirements for the NCC interoperability simulation capability, as well as the response of the law enforcement community to the use of
Both the choice made by the observer and consciousness are discussed in terms of cyclical time. That is, while the process of classical choice evolves forward in time, the quantum reference frame evolves backward in time to equate itself with the classical choice made by the observer, such that at the end, this corresponds to the case of self-observation in consciousness in linear time. This indicates that discrete and finite information is accompanied by a continuous or infinite "semantic" quantum part. In particular, the continuous semantic aspect is considered to be related to universal grammar, a suggested innate structure in languages. This paper also argues that the cyclical time model can be considered to have both small and large cycles and will also argue that at the most basic level, consciousness is strongly connected to time. This means that another aspect is added, that is, a more detailed description of the ongoing proposal of the subjective model, in which the classical is just as fundamental a...
Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.
Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…
Sinderen, van Marten; Johnson, Pontus; Xu, Xiaofei; Doumeingts, Guy
This year’s IWEI – IWEI 2012 – was held during September 6–7, 2012, in Harbin, China, following previous events in Stockholm, Sweden (2011), Valencia, Spain (2009), and Munich, Germany (2008). The theme of IWEI 2012 was “Collaboration, Interoperability and Services for Networked Enterprises,” thus e
The research presented covers semantic-based integration of data services in smart grids, achieved by following the proposed (S²)In-approach, which was developed in accordance with design science guidelines. This approach identifies standards and specifications that are integrated in order to build the basis for the (S²)In-architecture. A process model is introduced at the beginning, which serves as the framework for developing the target architecture. The first step of the process stipulates defining requirements for smart grid ICT-architectures, derived from established studies and
In this paper a semantics for dynamic predicate logic is developed that uses sequence-valued assignments. This semantics is compared with the usual relational semantics for dynamic predicate logic: it is shown that the most important intuitions of the usual semantics are preserved. Then it is shown
D D Dhobale
Large amounts of spatial data are becoming available today due to the rapid development of remote sensing techniques. Several retrieval systems have been proposed to retrieve necessary, interesting, and effective information, such as keyword-based image retrieval and content-based image retrieval. However, the results of these approaches are generally unsatisfactory and unpredictable and do not match human perception, due to the well-known gap between visual features and semantic concepts. In this paper, we propose a new approach to semantic satellite image retrieval that describes the semantic image content and manages uncertain information. It is based on an ontology model which represents spatial knowledge in order to provide a semantic understanding of image content. Our retrieval system is based on two modules: ontological model merging and strategic semantic image retrieval. The first module develops ontological models that represent the spatial knowledge of the satellite image and manages uncertain information. The second module retrieves satellite images based on their ontological models. In order to improve the quality of the retrieval system and to facilitate the retrieval process, we propose two retrieval strategies: the opportunist strategy and the hypothetic strategy. Our approach attempts to improve the quality of image retrieval, to reduce the semantic gap between visual features and semantic concepts, and to provide an automatic solution for efficient satellite image retrieval.
The word ‘clerk’ has undergone an evolution from ‘clergyman’ to ‘assistant’, during which its ‘cleric’ meaning was gradually lost. The paper focuses on these semantic changes, relating them to a historical event, the Reformation, and exploring the different meanings of ‘clerk’ at various stages.
Mir, Masood Saleem
The interdisciplinary nature of "Systems Engineering" (SE), having "stakeholders" from diverse domains with orthogonal facets, and need to consider all stages of "lifecycle" of system during conception, can benefit tremendously by employing "Knowledge Engineering" (KE) to achieve semantic agreement among all…
This book introduces a novel approach to intelligent visualizations that adapts the visual variables and data processing to human behavior and given tasks. A number of new algorithms and methods are introduced to satisfy the human need for information and knowledge and to enable a usable and attractive way of acquiring information. Each method and algorithm is illustrated in a replicable way to enable reproduction of the entire "SemaVis" system or parts of it. The evaluation is scientifically well designed and was performed with more than enough participants to validate the benefits of the methods. Besides the new approaches and algorithms, readers will find a sophisticated literature review of information visualization and visual analytics, semantics and information extraction, and intelligent and adaptive systems. This book is based on an awarded and distinguished doctoral thesis in computer science.
Syeda Farha Shazmeen, Etyala Ramyasree
Semantic Web Mining aims at combining the two areas Semantic Web and Web Mining: using semantics to improve mining, and using mining to create semantics. Web Mining aims at discovering insights about the meaning of Web resources and their usage. In the Semantic Web, semantic information is represented by relations to other resources and is recorded in RDF, a Semantic Web technology that can be utilized to build efficient and scalable systems for the Cloud. The Semantic Web enriches the World Wide Web with machine-processable information that supports users in their tasks and helps them get exact search results. In this paper, we discuss the interplay of the Semantic Web with Web Mining and list the benefits, challenges, and opportunities of the Semantic Web.
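The point that RDF records meaning as relations between resources can be illustrated with a toy triple store queried by pattern (the resources and predicates below are invented; a real system would use RDF IRIs and SPARQL).

```python
# Tiny illustration of the RDF idea: resources described by
# (subject, predicate, object) triples, queried by pattern matching.

TRIPLES = [
    ("page:home", "links_to", "page:products"),
    ("page:products", "links_to", "page:checkout"),
    ("page:home", "dc:title", "Home"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(p="links_to"))  # the site's link structure, ready to mine
```

A usage-mining step could then traverse exactly these `links_to` triples instead of re-parsing HTML, which is the "semantics to improve mining" direction of the abstract.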
Nielson, Flemming; Nielson, Hanne Riis
In principle termination analysis is easy: find a well-founded ordering and prove that calls decrease with respect to the ordering. We show how to embed termination information into a polymorphic type system for an eager higher-order functional language allowing multiple-argument functions and algebraic data types. The well-founded orderings are defined by pattern matching against the definition of the algebraic data types. We prove that the analysis is semantically sound with respect to a big-step (or natural) operational semantics. We compare our approach based on operational semantics to one...
Madlener, Ken; van Eekelen, Marko; 10.4204/EPTCS.62.2
One of the proposed solutions for improving the scalability of semantics of programming languages is Component-Based Semantics, introduced by Peter D. Mosses. It is expected that this framework can also be used effectively for modular meta theoretic reasoning. This paper presents a formalization of Component-Based Semantics in the theorem prover Coq. It is based on Modular SOS, a variant of SOS, and makes essential use of dependent types, while profiting from type classes. This formalization constitutes a contribution towards modular meta theoretic formalizations in theorem provers. As a small example, a modular proof of determinism of a mini-language is developed.
Hebert, M.; Bagnell, J. A.; Bajracharya, M.; Daniilidis, K.; Matthies, L. H.; Mianzo, L.; Navarro-Serment, L.; Shi, J.; Wellfare, M.
Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's environment in terms of semantic elements that can shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.
The principal idea behind this book is that lexis and grammar make up a single coherent structure. It is shown that the grammatical patterns of the different classes of Russian nominals are closely interconnected. They can be described as reflecting a limited set of semantic distinctions which are also rooted in the lexical-semantic classification of Russian nouns. The presentation focuses on semantics, both lexical and grammatical, and not least the connection between these two levels of content. The principal theoretical impact is the insight that grammar and lexis should not be seen...
A comprehensive and extensive review of the state of the art in semantics acquisition game (SAG) design; a set of design patterns for SAG designers; a set of case studies (real SAG projects) demonstrating the use of SAG design patterns.
Background: The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results: Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real-world use cases in these domains using the tools that the developers represented. This resulted in (i) a workflow to annotate 100,000 sequences from an invertebrate species; (ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; (iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; (iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions: Beyond deriving prototype solutions for each use case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: (i) the absence of several useful data or analysis functions in the Web service "space"; (ii) the lack of documentation of methods; (iii) lack of
Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana
perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.
Song, Dezhao; Chute, Christopher G; Tao, Cui
To facilitate clinical research, clinical data need to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfying. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. ...
ZHANG Hui; SONG Hantao; XU Xiaomei
A semantic session analysis method for partitioning Web usage logs is presented. A semantic Web usage log preparation model enriches usage logs with semantics. A Markov chain model based on an ontology semantic measurement is used to identify which active session a request should belong to, and a competitive method is applied to determine the end of each session. Compared with other algorithms, additional successful sessions are detected by semantic outlier analysis.
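The session-assignment step can be sketched as follows: attach each incoming request to the active session whose last page has the highest (semantically weighted) transition probability to the new page, or open a new session when no transition clears a threshold. The transition table below is an invented placeholder for the ontology-based Markov model.

```python
# Toy semantic session partitioning. P(next_page | last_page) would come
# from an ontology-weighted Markov chain; here it is hand-written.

TRANSITION = {
    ("laptops", "laptop-reviews"): 0.8,
    ("recipes", "laptop-reviews"): 0.1,
}

def assign_request(page, sessions, threshold=0.2):
    """Attach `page` to the best active session, or start a new one."""
    best, best_p = None, 0.0
    for sid, pages in sessions.items():
        p = TRANSITION.get((pages[-1], page), 0.0)
        if p > best_p:
            best, best_p = sid, p
    if best is not None and best_p >= threshold:
        sessions[best].append(page)
        return best
    sid = len(sessions)          # open a fresh session
    sessions[sid] = [page]
    return sid

sessions = {}
print(assign_request("laptops", sessions))  # no active fit → session 0
```

A request that fits no active session well (a semantic outlier in the abstract's terms) is exactly the case that triggers a new session here.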
In this book, we detail different theories, methods and implementations combining Web 2.0 paradigms and Semantic Web technologies in enterprise environments. After introducing those terms, we present the current shortcomings of tools such as blogs and wikis, as well as tagging practices, in an Enterprise 2.0 context. We define the SemSLATES methodology and the global vision of a middleware architecture based on Semantic Web technologies and Linked Data principles (languages, models, tools and protocols) to solve these issues. Then, we detail the various ontologies that we built to achieve this goal
Full Text Available Abstract Background Semantic relations increasingly underpin biomedical text mining and knowledge discovery applications. The success of such practical applications crucially depends on the quality of extracted relations, which can be assessed against a gold standard reference. Most such references in biomedical text mining focus on narrow subdomains and adopt different semantic representations, rendering them difficult to use for benchmarking independently developed relation extraction systems. In this article, we present a multi-phase gold standard annotation study, in which we annotated 500 sentences randomly selected from MEDLINE abstracts on a wide range of biomedical topics with 1371 semantic predications. The UMLS Metathesaurus served as the main source for conceptual information and the UMLS Semantic Network for relational information. We measured interannotator agreement and analyzed the annotations closely to identify some of the challenges in annotating biomedical text with relations based on an ontology or a terminology. Results We obtain fair to moderate interannotator agreement in the practice phase (0.378-0.475). With improved guidelines and additional semantic equivalence criteria, the agreement increases by 12% (0.415 to 0.536) in the main annotation phase. In addition, we find that agreement increases to 0.688 when the agreement calculation is limited to those predications that are based only on the explicitly provided UMLS concepts and relations. Conclusions While interannotator agreement in the practice phase confirms that conceptual annotation is a challenging task, the increasing agreement in the main annotation phase points out that an acceptable level of agreement can be achieved in multiple iterations, by setting stricter guidelines and establishing semantic equivalence criteria. Mapping text to ontological concepts emerges as the main challenge in conceptual annotation. Annotating predications involving biomolecular
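Agreement figures like those above are typically chance-corrected. The abstract does not state which coefficient was used, so as a generic sketch only, Cohen's kappa for two annotators can be computed as follows:

```python
# Generic chance-corrected agreement (Cohen's kappa) for two annotators
# labeling the same items; illustrative only -- the study's exact
# agreement measure is not specified in the abstract.
from collections import Counter

def cohens_kappa(ann1, ann2):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # Chance agreement: probability both annotators pick the same label
    # independently, estimated from their marginal label frequencies.
    expected = sum(c1[label] * c2[label] for label in set(ann1) | set(ann2)) / (n * n)
    return (observed - expected) / (1 - expected)
```

For instance, annotations `["T", "T", "F", "F"]` versus `["T", "T", "F", "T"]` give an observed agreement of 0.75, a chance agreement of 0.5, and hence a kappa of 0.5.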
SHAN Baoci; ZHANG Wutian; MA Lin; LI Dejun; CAO Bingli; TANG Yiyuan; WU Yigen; TANG Xiaowei
This study identified the active cerebral areas of normal Chinese subjects that are associated with Chinese semantic processing, using functional brain imaging. According to traditional cognitive theory, semantic processing is not particularly associated with or affected by input modality. Functional brain imaging experiments were conducted to identify the common active areas of two modalities when subjects perform Chinese semantic tasks through reading and listening, respectively. The results show that the common active areas include the left inferior frontal gyrus (BA 44/45), the left posterior inferior temporal gyrus (BA 37), the joint area of the inferior parietal lobule (BA 40) and superior temporal gyrus, and the ventral occipital areas and cerebella of both hemispheres. This gives important clues for further discerning the roles of different cerebral areas in Chinese semantic processing.
Frank, A I
The paper presents a constraint-based semantic formalism for HPSG. The advantages of the formalism are shown with respect to a grammar for a fragment of German that deals with (i) quantifier scope ambiguities triggered by scrambling and/or movement and (ii) ambiguities that arise from the collective/distributive distinction of plural NPs. The syntax-semantics interface directly implements syntactic conditions on quantifier scoping and distributivity. The construction of semantic representations is guided by general principles governing the interaction between syntax and semantics. Each of these principles acts as a constraint to narrow down the set of possible interpretations of a sentence. Meanings of ambiguous sentences are represented by single partial representations (so-called U(nderspecified) D(iscourse) R(epresentation) S(tructure)s) to which further constraints can be added monotonically to gain more information about the content of a sentence. There is no need to build up a large number of alternative...
U.S. Department of Health & Human Services — The SKR Project was initiated at NLM in order to develop programs to provide usable semantic representation of biomedical free text by building on resources...
Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)
At the Interoperability Showcase at HIMSS14, we demonstrated a new ICE app, "Real-Time Blue Button for Patients and Families," that streams physiological data (including waveforms) from medical devices connected at our lab in Cambridge, MA, as well as data from medical devices connected locally. At the SmartAmerica
Müller, Wilmuth; Marques, Hugo; Pereira, Luis; Rodriguez, Jonathan; Brouwer, Frank; Bouwers, Bert; Politis, Ilias; Lykourgiotis, Asimakis; Ladas, Alexandros; Adigun, Olayinka; Jelenc, David
The growing number of events affecting public safety and security (PS&S) on a regional scale, with the potential to grow into large-scale cross-border disasters, puts increased pressure on the agencies and organisations responsible for PS&S. In order to respond to such events in a timely and adequate manner, Public Protection and Disaster Relief (PPDR) organisations need to cooperate, align their procedures and activities, share the needed information and be interoperable. Existing PPDR/PMR technologies such as TETRA, TETRAPOL or P25 do not currently provide broadband capability, nor are such technologies expected to be upgraded in the future. This presents a major limitation in supporting new services and information flows. Furthermore, there is no known standard that addresses the interoperability of these technologies. In this contribution, the design of a next-generation communication infrastructure for PPDR organisations is presented, which fulfils the requirements of secure and seamless end-to-end communication and interoperable information exchange within the deployed communication networks. Based on the Enterprise Architecture of PPDR organisations, a next-generation PPDR network that is backward compatible with legacy communication technologies is designed and implemented, capable of providing security, privacy, seamless mobility, QoS and reliability support for mission-critical Private Mobile Radio (PMR) voice and broadband data services. The designed solution provides a robust, reliable, and secure mobile broadband communications system for a wide variety of PMR applications and services on PPDR broadband networks, including the ability to conduct inter-system, inter-agency and cross-border operations, with emphasis on interoperability between users in PMR and LTE.
Wende, Kristin; Legner, Christine
In recent years, the established roles in the automotive industry have undergone changes: automakers, which have traditionally exercised control over the entire value chain, are now increasingly focusing on branding and distribution. At the same time, tier-1 suppliers are becoming vehicle integrators. This paper analyses how new forms of cooperation impact the required level of business interoperability. The comparison of two cases, a traditional OEM-supplier relationship and an innovative form ...
Subhasis Ray; Bhalla, Upinder S.
Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version ...
Conroy, Mike; Gill, Paul; Hill, Bradley; Ibach, Brandon; Jones, Corey; Ungar, David; Barch, Jeffrey; Ingalls, John; Jacoby, Joseph; Manning, Josh; Bengtsson, Kjell; Falls, Mark; Kent, Peter; Heath, Shaun; Kennedy, Steven
The TDI project investigates trending technical data standards for applicability to NASA vehicles, space stations, payloads, facilities, and equipment. TDI tested COTS software compatible with a certain suite of related industry standards, assessing both individual capabilities and their interoperability. These standards not only enable Information Technology (IT) efficiencies, but also address efficient structures and standard content for business processes. We used source data from generic industry samples as well as NASA and European Space Agency (ESA) data from space systems.
Engeser, Stefan; Baumann, Nicola; Baum, Ingrid
Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications.
Historical linguistics is traditionally concerned with phonology and syntax. With the exception of grammaticalization - the development of auxiliary verbs, the syntactic rather than localistic use of prepositions, etc. - semantic change has usually not been described as a result of regular developments, but only as specific meaning changes in individual words. This paper will suggest some regularities in semantic change, regularities which, like sound laws, have predictive power and can be tested against recorded languages.
Hart, John; Anand, Raksha; Zoccoli, Sandra; Maguire, Mandy; Gamino, Jacque; Tillman, Gail; King, Richard; Kraut, Michael A
Semantic memory is described as the storage of knowledge, concepts, and information that is common and relatively consistent across individuals (e.g., memory of what is a cup). These memories are stored in multiple sensorimotor modalities and cognitive systems throughout the brain (e.g., how a cup is held and manipulated, the texture of a cup's surface, its shape, its function, that is related to beverages such as coffee, and so on). Our ability to engage in purposeful interactions with our environment is dependent on the ability to understand the meaning and significance of the objects and actions around us that are stored in semantic memory. Theories of the neural basis of the semantic memory of objects have produced sophisticated models that have incorporated to varying degrees the results of cognitive and neural investigations. The models are grouped into those that are (1) cognitive models, where the neural data are used to reveal dissociations in semantic memory after a brain lesion occurs; (2) models that incorporate both cognitive and neuroanatomical information; and (3) models that use cognitive, neuroanatomic, and neurophysiological data. This review highlights the advances and issues that have emerged from these models and points to future directions that provide opportunities to extend these models. The models of object memory generally describe how category and/or feature representations encode for object memory, and the semantic operations engaged in object processing. The incorporation of data derived from multiple modalities of investigation can lead to detailed neural specifications of semantic memory organization. The addition of neurophysiological data can potentially provide further elaboration of models to include semantic neural mechanisms. Future directions should incorporate available and newly developed techniques to better inform the neural underpinning of semantic memory models.
Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligen
Morales, Aythami; González, Ester; Ferrer, Miguel A.
Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714
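The feature-level smoothing the authors propose to reduce interdevice variability could, in its simplest form, be a moving average over a feature vector; the actual filters used in the paper are not specified here, so the following is only an illustrative stand-in.

```python
# Minimal stand-in for feature-level smoothing against interdevice
# variability: a centered moving average over a 1-D feature vector.
# The paper's actual smoothing techniques may differ.
def smooth_features(features, window=3):
    """Average each value with its neighbors inside a centered window,
    truncating the window at the vector's edges."""
    half = window // 2
    out = []
    for i in range(len(features)):
        lo, hi = max(0, i - half), min(len(features), i + half + 1)
        out.append(sum(features[lo:hi]) / (hi - lo))
    return out
```

A spiky vector such as `[0, 3, 0]` becomes `[1.5, 1.0, 1.5]`, damping sensor-specific outliers before feature comparison.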
Pape-Haugaard, Louise; Frank, Lars
A major obstacle in ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective in this paper is to illustrate how an architecture for distributed eHealth databases can be designed without lacking the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to use an architectural framework to obtain sustainability across disparate systems i.e. heterogeneous databases, concluded with a discussion. It is seen that through a method of using relaxed ACID properties on a service-oriented architecture it is possible to achieve data consistency which is essential when ensuring sustainable interoperability.
Heinecke, Johannes; Worm, Karsten L.
This paper describes the development and use of a lexical semantic database for the Verbmobil speech-to-speech machine translation system. The motivation is to provide a common information source for the distributed development of the semantics, transfer and semantic evaluation modules and to store lexical semantic information application-independently. The database is organized around a set of abstract semantic classes and has been used to define the semantic contributions of the lemmata in the vocabulary of the system, to automatically create semantic lexica and to check the correctness of the semantic representations built up. The semantic classes are modelled using an inheritance hierarchy. The database is implemented using the lexicon formalism LeX4 developed during the project.
David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.
The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
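The annotation-driven, non-invasive integration style described above can be approximated in Python with decorators: the framework reads metadata from a registry while the model function itself carries no framework API calls. The decorator name and registry below are invented for illustration and are not the actual OMS 3.0 API.

```python
# Python-decorator analog of annotation-based component metadata,
# loosely modeled on the OMS 3.0 approach described above.
# `component` and REGISTRY are hypothetical names, not OMS identifiers.
REGISTRY = {}

def component(role):
    """Attach framework metadata to a model function non-invasively:
    the function body stays free of framework calls."""
    def wrap(fn):
        REGISTRY[fn.__name__] = {"role": role, "func": fn}
        return fn  # the model code is returned unchanged
    return wrap

@component(role="water-balance")
def monthly_balance(precip, evap):
    """Toy monthly water balance: storage change = precipitation - evaporation."""
    return precip - evap
```

Because the metadata lives outside the function, `monthly_balance` remains an ordinary function that can be reused without the framework, which is the invasiveness reduction the study measures.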
The distribution semantics is one of the most prominent approaches for the combination of logic programming and probability theory. Many languages follow this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs) and ProbLog. When a program contains functions symbols, the distribution semantics is well-defined only if the set of explanations for a query is finite and so is each explanation. Well-definedness is usually either explicitly imposed or is achieved by severely limiting the class of allowed programs. In this paper we identify a larger class of programs for which the semantics is well-defined together with an efficient procedure for computing the probability of queries. Since LPADs offer the most general syntax, we present our results for them, but our results are applicable to all languages under the distribution semantics. We present the algorithm "Probabilistic Inference with Tabling and Answer subsumption" (PITA) that computes the probability of...
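Under the distribution semantics, the probability of a query is the total probability of the possible worlds in which it succeeds. The sketch below computes this by brute-force world enumeration, which is feasible only for tiny programs; PITA's contribution is precisely to avoid this blow-up via tabling and answer subsumption. Names here are illustrative.

```python
# Brute-force query probability under the distribution semantics:
# enumerate every truth assignment to the probabilistic facts (a
# "possible world") and sum the probabilities of worlds where the
# query holds. Illustrative only; PITA avoids this enumeration.
from itertools import product

def query_prob(facts, holds):
    """facts: name -> probability of an independent probabilistic fact.
    holds(world): True iff the query succeeds in that world."""
    names = list(facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        world = dict(zip(names, bits))
        p = 1.0
        for name, present in world.items():
            p *= facts[name] if present else 1 - facts[name]
        if holds(world):
            total += p
    return total
```

For two independent 0.5-probability facts `a` and `b`, the query "a or b" succeeds in three of the four worlds, giving probability 0.75.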
Nguyen, Loc H
The sharing of data, particularly health data, has been an important tool for the public health community, especially in terms of data sharing across systems (i.e., interoperability). Child maltreatment is a serious public health issue that could be better mitigated if there were interoperability. There are challenges to addressing child maltreatment interoperability that include the current lack of data sharing among systems, the lack of laws that promote interoperability to address child maltreatment, and the lack of data sharing at the individual level. There are waivers in federal law that allow for interoperability to prevent communicable diseases at the individual level. Child maltreatment has a greater long-term impact than a number of communicable diseases combined, and interoperability should be leveraged to maximize public health strategies to prevent child maltreatment.
Cerón, Jesús D; Gómez, Guillermo A; López, Diego M; González, Carolina; Blobel, Bernd
A Personal Health Record (PHR) is a health information repository controlled and managed directly by a patient, his/her custodian, or a person interested in his/her own health. A PHR system's adoption of, and compliance with, international standards is of foremost importance because it can help to meet international, national, regional or institutional interoperability and portability policies. In this paper, an interoperable PHR system for supporting the control of type 2 diabetes mellitus is proposed, which meets the mandatory interoperability requirements of the Personal Health Record System Functional Model standard (ISO 16527). After a detailed analysis of different applications and platforms for implementing electronic Personal Health Records, an adaptation of the Indivo Health open source platform was completed. Interoperability functions were added to this platform by integrating the Mirth Connect platform. The platform's interoperability capabilities were assessed by a group of experts, who verified the interoperability requirements proposed in the ISO 16527 standard.
Bui, Duc Viet; Iacob, Maria-Eugenia; van Sinderen, Marten; Zarghami, Alireza
In elderly care the shortage of available financial and human resources for coping with an increasing number of elderly people becomes critical. Current solutions to this problem focus on efficiency gains through the usage of information systems and include homecare services provided by IT systems.
Johnson, Matthew; Brostow, G. J.; Shotton, J.; Kwatra, V.; Cipolla, R.
Composite images are synthesized from existing photographs by artists who make concept art, e.g. storyboards for movies or architectural planning. Current techniques allow an artist to fabricate such an image by digitally splicing parts of stock photographs. While these images serve mainly to "quickly" convey how a scene should look, their production is laborious. We propose a technique that allows a person to design a new photograph with substantially less effort. This paper presents a method that generates a composite image when a user types in nouns, such as "boat" and "sand." The artist can optionally design an intended image by specifying other constraints. Our algorithm formulates the constraints as queries to search an automatically annotated image database. The desired photograph, not a collage, is then synthesized using graph-cut optimization, optionally allowing for further user interaction to edit or choose among alternative generated photos. Our results demonstrate our contributions of (1) a method of creating specific images with minimal human effort, and (2) a combined algorithm for automatically building an image library with semantic annotations from any photo collection.
Wang, Xiaogang; Qiu, Shi; Liu, Ke; Tang, Xiaoou
Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which reflect the user's search intention. Recently, it has been proposed to match images in a semantic space that uses attributes or reference classes closely related to the semantic meanings of images as its basis. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework that automatically learns, offline, different semantic spaces for different query keywords. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show that a 25-40 percent relative improvement in re-ranking precision has been achieved compared with state-of-the-art methods.
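The signature-and-compare step can be sketched simply: project each feature vector onto a set of reference-class prototypes to get a short signature, then rank the pool by signature distance to the query image. The prototypes below are plain dot-product directions, a simplified stand-in for the per-keyword classifiers the paper trains.

```python
# Simplified sketch of query-specific semantic signatures: reference-class
# prototype vectors stand in for the trained classifiers in the paper.
import math

def semantic_signature(feature, reference_classes):
    """Project a visual feature vector onto each reference-class
    prototype; the resulting scores form a short semantic signature."""
    return [sum(f * r for f, r in zip(feature, ref)) for ref in reference_classes]

def rerank(query_feat, pool, refs):
    """Order pool images by Euclidean distance between their semantic
    signatures and the query image's signature."""
    qsig = semantic_signature(query_feat, refs)
    return sorted(pool, key=lambda item: math.dist(qsig, semantic_signature(item, refs)))
```

With prototypes `[[1, 0], [0, 1]]` and query feature `[1, 0]`, an image with feature `[0.9, 0.1]` ranks ahead of one with `[0, 1]`, since its signature lies closer to the query's.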
Oliveira, Manuel Au-Yong; Ferreira, João José Pinto
We intend to use multiple case studies to develop a theoretical model concerning the contemporary phenomenon of organizational innovativeness and its link to interoperability. We are interested in particular in interoperability as pertaining to people and organizations able to operate in conjunction (together) to produce innovation. Interoperability can be defined as “the ability of a system or an organization to work seamless[ly] with other systems or organization[s] without any special effo...
Kishor, P.; Peckham, S. D.; Gower, S. T.; Batzli, S.
Large-scale terrestrial ecosystem modeling is highly parameterized and requires a lot of historical data. Routine model runs can easily utilize hundreds of gigabytes, even terabytes, of data on tens, perhaps hundreds, of parameters. It is a given that no one modeler can or does collect all the required data. All modelers depend upon other scientists, and governmental and research agencies, for their data needs. This is where data accessibility and interoperability become crucial for the success of the project. Having well-documented, quality data available in a timely fashion can greatly assist a project's progress, while the converse can bring the project to a standstill, leading to a large amount of wasted staff time and resources. Data accessibility is a complex issue -- at best, it is an unscientific composite of a variety of factors: technological, legal, cultural, semantic, and economic. In reality, it is a concept that most scientists only worry about when they need some data, and mostly never after their project is complete. The exigencies of the vetting, review and publishing processes overtake the long-term view of making one's own data available to others with the same ease and openness that was desired when seeking data from others. This presentation describes our experience with acquiring data for our carbon modeling efforts, dealing with federal, state and local agencies, a variety of data formats, some published, some not so easy to find, and documentation that ranges from excellent to non-existent. A set of indicators is proposed to place and determine the accessibility of scientific data -- those we are seeking and those we are producing -- in order to bring some transparency and clarity that can make data acquisition and sharing easier. The paper concludes with a proposal to utilize free, open and well-recognized data marks such as CC0 (CC-Zero), the Public Domain Dedication License, and CC-BY, created by Creative Commons, that would advertise the
Bin Xu; Po Zhang; Juan-Zi Li; Wen-Jun Yang
This paper is concerned with a matchmaker for ranking web services by using semantics. So far, several semantic matchmaking methods have been proposed. Most of them, however, focus on classifying services into predefined categories rather than providing a ranking result. In this paper, a new semantic matchmaking method is proposed for ranking web services. It uses semantic distance to estimate the matching degree between a service and a user request. Four types of semantic distances are defined, and four algorithms are implemented to calculate them. Experimental results show that the proposed semantic matchmaker significantly outperforms the keyword-based baseline method.
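One common way to realize a semantic distance is as the edge count between two concepts in an is-a taxonomy, with services then ranked by the distance between their advertised output concept and the requested concept. The tiny taxonomy and distance below are invented for illustration; the paper's four distance types are not reproduced here.

```python
# Illustrative taxonomy-based semantic distance for service ranking.
# TAXONOMY maps each concept to its parent (is-a); all names are invented.
TAXONOMY = {"sedan": "car", "car": "vehicle", "truck": "vehicle"}

def semantic_distance(a, b):
    """Edges from a and b up to their nearest common ancestor; unrelated
    concepts get a large penalty so they rank last."""
    def ancestors(c):
        chain, depth = {c: 0}, 0
        while c in TAXONOMY:
            c, depth = TAXONOMY[c], depth + 1
            chain[c] = depth
        return chain
    up_a, up_b = ancestors(a), ancestors(b)
    common = set(up_a) & set(up_b)
    return min((up_a[c] + up_b[c] for c in common), default=99)

def rank_services(request_concept, services):
    """Order service descriptions by semantic distance to the request."""
    return sorted(services, key=lambda s: semantic_distance(request_concept, s["output"]))
```

A request for "sedan" ranks a service producing "car" (distance 1, its direct parent) ahead of one producing "truck" (distance 3, via the shared "vehicle" ancestor).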
The book is composed of two main parts. The first part is a general study of Semantic Web Search. The second part specifically focuses on the use of semantics throughout the search process, compiling a big picture of Process-oriented Semantic Web Search from different pieces of work that target specific aspects of the process.In particular, this book provides a rigorous account of the concepts and technologies proposed for searching resources and semantic data on the Semantic Web. To collate the various approaches and to better understand what the notion of Semantic Web Search entails, this bo
This is version 1.0 of the CASL Language Summary, annotated by the CoFI Semantics Task Group with the semantics of constructs. This is the first complete but possibly imperfect version of the semantics. It was compiled prior to the CoFI workshop at Cachan in November 1998.
We describe several views of the semantics of a simple programming language as formal documents in the calculus of inductive constructions that can be verified by the Coq proof system. Covered aspects are natural semantics, denotational semantics, axiomatic semantics, and abstract interpretation. Descriptions as recursive functions are also provided whenever suitable, thus yielding a verification condition generator and a static analyser that can be run inside the theorem prover for use in reflective proofs. Extraction of an interpreter from the denotational semantics is also described. All different aspects are formally proved sound with respect to the natural semantics specification.
... interoperability in interstate transmission of electric power, and regional and wholesale electricity markets.'' \\3... electricity transmission and distribution system and directs the development of a framework to...
Full Text Available A mashup is a combination of information from more than one source, mixed together to create something new, or at least useful. Mashups can be found all over the internet, but they are always designed for a predefined purpose. To change that, we implemented a new platform we call the SMART platform. SMART enables users to choose the REST web services they need to call in order to build an intelligent personalized mashup, from a Google-like simple search interface, without needing any programming skills. To achieve this goal, we defined an ontology that holds REST web service descriptions. These descriptions mainly encapsulate the input type needed by a service, its output type, and the kind of relation that ties the input to the output. Then, by matching the keywords of the user's input query with the REST web service definitions in our ontology, we can find registered service individuals in this ontology and construct the raw REST query for each service found. The wrap-up from keywords into semantic definitions (to find the matching service individual), the wrap-down from the semantic service description of the found individual to the raw REST call, and finally the wrap-up of the result again into semantic individuals serve two main purposes: first, to let the user build complex mashups from simple keywords, and second, to benefit from the ontology's inference engine so that service instances can be tied together into an intelligent mashup, simply by letting each service's output individuals stand as the next service's input.
Pomi, Andrés; Mizraji, Eduardo
Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
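The abstract's central claim, that the memory coded with the standard basis of the concept space is just the adjacency matrix of the association graph, can be checked numerically. The 4-concept net and its edges below are a hypothetical example, not taken from the article:

```python
import numpy as np

# Hypothetical 4-concept semantic net; edges are directed associations.
edges = [(0, 1), (1, 2), (0, 3), (3, 2)]
n = 4

# Distributed associative (correlation matrix) memory: a sum of outer
# products of output and input patterns. Coding each concept with the
# standard basis vector e_i makes association i -> j contribute e_j e_i^T.
E = np.eye(n)
M = sum(np.outer(E[j], E[i]) for i, j in edges)

# Adjacency matrix of the association graph (A[i, j] = 1 iff i -> j).
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = 1.0

# The memory matrix coincides with the (transposed) adjacency matrix.
assert np.array_equal(M, A.T)
```

With any other (e.g. random distributed) concept coding the stored structure is the same graph, only represented in a different basis, which is what makes the graph spectrum a code invariant of the memory.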
Bug, William; Astahkov, Vadim; Boline, Jyl; Fennema-Notestine, Christine; Grethe, Jeffrey S; Gupta, Amarnath; Kennedy, David N; Rubin, Daniel L; Sanders, Brian; Turner, Jessica A; Martone, Maryann E
The broadly defined mission of the Biomedical Informatics Research Network (BIRN, www.nbirn.net) is to better understand the causes of human disease and the specific ways in which animal models inform that understanding. To construct the community-wide infrastructure for gathering, organizing and managing this knowledge, BIRN is developing a federated architecture for linking multiple databases across sites contributing data and knowledge. Navigating across these distributed data sources requires a shared semantic scheme and a supporting software framework to actively link the disparate repositories. At the core of this knowledge organization is BIRNLex, a formally represented ontology facilitating data exchange. Source curators enable database interoperability by mapping their schema and data to BIRNLex semantic classes, thereby providing a means to cast BIRNLex-based queries against specific data sources in the federation. We will illustrate use of the source registration, term mapping, and query tools.
Rose, Kristoffer Høgsbro
Presents Graph Operational Semantics (GOS): a semantic specification formalism based on structural operational semantics and term graph rewriting. Demonstrates the method by specifying the dynamic ...
The explosive growth in the size and use of the World Wide Web continuously creates great new challenges and needs. The need to predict users' preferences in order to expedite and improve browsing through a site can be met by personalizing websites. Most research efforts in web personalization correspond to the evolution of extensive research in web usage mining, i.e. the exploitation of the navigational patterns of a web site's visitors. When a personalization system relies solely on usage-based results, however, valuable information conceptually related to what is finally recommended may be missed. Moreover, the structural properties of the web site are often disregarded. In this paper, we propose novel techniques that use the content semantics and the structural properties of a web site to improve the effectiveness of web personalization. In the first part of our work we present a system for Semantic Web Personalization that integrates usage data with content semantics, expressed in ontology terms, in order to compute semantically enhanced navigational patterns and effectively generate useful recommendations. To the best of our knowledge, our proposed technique is the only semantic web personalization system that may be used by non-semantic web sites. In the second part of our work, we present a novel approach for enhancing the quality of recommendations based on the underlying structure of a web site. We introduce UPR (Usage-based PageRank), a PageRank-style algorithm that relies on the recorded usage data and link analysis techniques. Overall, we demonstrate that our proposed hybrid personalization framework results in more objective and representative predictions than existing techniques.
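The UPR idea, a PageRank-style walk biased by recorded usage data, can be sketched with a small power iteration. The 4-page site, the transition counts, and the damping value below are hypothetical illustrations, not figures from the paper:

```python
import numpy as np

# Hypothetical usage-weighted link graph of a 4-page site:
# W[i, j] = number of recorded visitor transitions from page i to page j.
W = np.array([[0, 4, 1, 0],
              [0, 0, 3, 2],
              [1, 0, 0, 5],
              [2, 0, 0, 0]], dtype=float)

d = 0.85                      # damping factor, as in classical PageRank
n = W.shape[0]

# Row-normalize transition counts into probabilities;
# a dangling page (no outgoing transitions) would fall back to uniform.
row_sums = W.sum(axis=1, keepdims=True)
P = np.where(row_sums > 0, W / np.where(row_sums == 0, 1, row_sums), 1.0 / n)

# Power iteration for the usage-biased stationary distribution.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * (P.T @ r)

print(r)   # pages ranked by usage-weighted importance
```

The only change from plain PageRank is that edge weights come from observed click-through counts rather than a uniform split over hyperlinks, which is the sense in which the ranking reflects usage.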
Helena F Deus
BACKGROUND: Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defy automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. METHODOLOGY/PRINCIPAL FINDINGS: The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical knowledge engineering applications and demonstrate how this new technology can be used to weave a management model in which multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration links data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed, and its proposed design has been made publicly available as an open-source instrument for shared, distributed data management. CONCLUSIONS/SIGNIFICANCE: Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development, we can expect that both general-purpose productivity software and domain-specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis.
Santoro, M.; Mazzetti, P.; Fugazza, C.; Nativi, S.; Craglia, M.
One of the main challenges in Earth Science Informatics is to build interoperability frameworks which allow users to discover, evaluate, and use information from different scientific domains. This requires addressing multidisciplinary interoperability challenges concerning both technological and scientific aspects. From the technological point of view, it is necessary to provide a set of special interoperability arrangements in order to develop flexible frameworks that allow a variety of loosely coupled services to interact with each other. From a scientific point of view, it is necessary to document clearly the theoretical and methodological assumptions underpinning applications in different scientific domains, and to develop cross-domain ontologies to facilitate interdisciplinary dialogue and understanding. In this presentation we discuss a brokering approach that extends the traditional Service Oriented Architecture (SOA) adopted by most Spatial Data Infrastructures (SDIs) to provide the necessary special interoperability arrangements. In the EC-funded EuroGEOSS (A European approach to GEOSS) project, we distinguish among three possible functional brokering components: discovery, access, and semantics brokers. This presentation focuses on the semantics broker, the Discovery Augmentation Component (DAC), which was specifically developed to address the three thematic areas covered by the EuroGEOSS project: biodiversity, forestry and drought. The EuroGEOSS DAC federates both semantic services (e.g. SKOS repositories) and ISO-compliant geospatial catalog services. The DAC can be queried using common geospatial constraints (i.e. what, where, when, etc.). Two different augmented discovery styles are supported: (a) automatic query expansion; (b) user-assisted query expansion. In the first case, the main discovery steps are: i. the query keywords (the what constraint) are "expanded" with related concepts/terms retrieved from the set of federated semantic services. A default expansion
I explore some of the issues that arise when trying to establish a connection between the underspecification hypothesis pursued in the NLP literature and work on ambiguity in semantics and in the psychological literature. A theory of underspecification is developed `from the first principles', i.e., starting from a definition of what it means for a sentence to be semantically ambiguous and from what we know about the way humans deal with ambiguity. An underspecified language is specified as the translation language of a grammar covering sentences that display three classes of semantic ambiguity: lexical ambiguity, scopal ambiguity, and referential ambiguity. The expressions of this language denote sets of senses. A formalization of defeasible reasoning with underspecified representations is presented, based on Default Logic. Some issues to be confronted by such a formalization are discussed.
Nieves, Juan Carlos; Cortés, Ulises
In this paper, a possibilistic disjunctive logic programming approach for modeling uncertain, incomplete and inconsistent information is defined. This approach introduces the use of possibilistic disjunctive clauses which are able to capture incomplete information and incomplete states of a knowledge base at the same time. By considering a possibilistic logic program as a possibilistic logic theory, a construction of a possibilistic logic programming semantic based on answer sets and the proof theory of possibilistic logic is defined. It shows that this possibilistic semantics for disjunctive logic programs can be characterized by a fixed-point operator. It is also shown that the suggested possibilistic semantics can be computed by a resolution algorithm and the consideration of optimal refutations from a possibilistic logic theory. In order to manage inconsistent possibilistic logic programs, a preference criterion between inconsistent possibilistic models is defined; in addition, the approach of cuts for re...
Springer, Anne; Prinz, Wolfgang
Previous studies have demonstrated that action prediction involves an internal action simulation that runs time-locked to the real action. The present study replicates and extends these findings by indicating a real-time simulation process (Graf et al., 2007), which can be differentiated from a similarity-based evaluation of internal action representations. Moreover, results showed that action semantics modulate action prediction accuracy. The semantic effect was specified by the processing of action verbs and concrete nouns (Experiment 1) and, more specifically, by the dynamics described by action verbs (Experiment 2) and the speed described by the verbs (e.g., "to catch" vs. "to grasp" vs. "to stretch"; Experiment 3). These results propose a linkage between action simulation and action semantics as two yet unrelated domains, a view that coincides with a recent notion of a close link between motor processes and the understanding of action language.
Albertazzi, Liliana; Canal, Luisa; Dadam, James; Micciolo, Rocco
This study analyses how certain qualitative perceptual appearances of biological forms are correlated with expressions of natural language. Making use of the Osgood semantic differential, we presented the subjects with 32 drawings of biological forms and a list of 10 pairs of connotative adjectives to be put in correlations with them merely by subjective judgments. The principal components analysis made it possible to group the semantics of forms according to two distinct axes of variability: harmony and dynamicity. Specifically, the nonspiculed, nonholed, and flat forms were perceived as harmonic and static; the rounded ones were harmonic and dynamic. The elongated forms were somewhat disharmonious and somewhat static. The results suggest the existence in the general population of a correspondence between perceptual and semantic processes, and of a nonsymbolic relation between visual forms and their adjectival expressions in natural language.
Hare, T. M.; Gaddis, L. R.
For nearly a decade there has been a push in the planetary science community to support interoperable methods of accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (i.e., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized image formats that retain geographic information (e.g., GeoTiff, GeoJpeg2000), digital geologic mapping conventions, planetary extensions for symbols that comply with U.S. Federal Geographic Data Committee cartographic and geospatial metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter includes defined standards such as the OGC Web Mapping Services (simple image maps), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they have been modified to support the planetary domain. The motivation to support common, interoperable data format and delivery standards is not only to improve access for higher-level products but also to address the increasingly distributed nature of the rapidly growing volumes of data. The strength of using an OGC approach is that it provides consistent access to data that are distributed across many facilities. While data-streaming standards are well supported by the more sophisticated tools used in the Geographic Information System (GIS) and remote sensing industries, they are also supported by many light-weight browsers, which facilitates large and small focused science applications and public use. Here we provide an
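An OGC WMS request of the kind these standards define can be sketched as a simple GetMap URL. The endpoint and layer name below are placeholders, not a real planetary service; the parameter names follow the OGC WMS 1.3.0 specification:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real planetary WMS would be substituted here.
base_url = "https://example.org/planetary/wms"

# GetMap parameters per OGC WMS 1.3.0 (layer name is an assumption).
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "mars_mola_shaded",
    "CRS": "EPSG:4326",
    "BBOX": "-30,0,30,90",      # bounds in the CRS-defined axis order
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}

request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```

Because every WMS server answers the same GetMap grammar, a light-weight browser client needs no knowledge of which facility actually hosts the mosaic, which is exactly the distributed-access strength described above.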
Chalub, Fabricio; Braga, Christiano de Oliveira
This paper presents a modular rewriting semantics (MRS) specification for Reppy's Concurrent ML (CML), based on Peter Mosses' modular structural operational semantics specification for CML. A modular rewriting semantics specification for a programming language is a rewrite theory in rewriting log...... of rewriting logic, and to verify CML programs using Maude's built-in LTL model checker. It is assumed that the reader is familiar with basic concepts of structural operational semantics and algebraic specifications....
Aceto, Luca; 10.4204/EPTCS.32
Structural operational semantics (SOS) is a technique for defining operational semantics for programming and specification languages. Because of its intuitive appeal and flexibility, SOS has found considerable application in the study of the semantics of concurrent processes. It is also a viable alternative to denotational semantics in the static analysis of programs and in proving compiler correctness. Recently it has been applied in emerging areas such as probabilistic systems and systems biology.
Lucord, Steve; Martinez, Lindolfo
We are entering a new era in space exploration. Reduced operating budgets require innovative solutions to leverage existing systems to implement the capabilities of future missions. Custom solutions to fulfill mission objectives are no longer viable. Can NASA adopt international standards to reduce costs and increase interoperability with other space agencies? Can legacy systems be leveraged in a service oriented architecture (SOA) to further reduce operations costs? The Operations Technology Facility (OTF) at the Johnson Space Center (JSC) is collaborating with Deutsches Zentrum fur Luft- und Raumfahrt (DLR) to answer these very questions. The Mission Operations and Information Management Services Area (MOIMS) Spacecraft Monitor and Control (SM&C) Working Group within the Consultative Committee for Space Data Systems (CCSDS) is developing the Mission Operations standards to address this problem space. The set of proposed standards presents a service oriented architecture to increase the level of interoperability among space agencies. The OTF and DLR are developing independent implementations of the standards as part of an interoperability prototype. This prototype will address three key components: validation of the SM&C Mission Operations protocol, exploration of the Object Management Group (OMG) Data Distribution Service (DDS), and the incorporation of legacy systems in a SOA. The OTF will implement the service providers described in the SM&C Mission Operation standards to create a portal for interaction with a spacecraft simulator. DLR will implement the service consumers to perform the monitor and control of the spacecraft. The specifications insulate the applications from the underlying transport layer. We will gain experience with a DDS transport layer as we delegate responsibility to the middleware and explore transport bridges to connect disparate middleware products. A SOA facilitates the reuse of software components. The prototype will leverage the
The Internet of Things (IoT) brings connectivity to nearly every object found in the physical space. It extends connectivity to everyday objects. From connected fridges and cars to connected cities, the IoT creates opportunities in numerous domains. However, this increase in connectivity creates many prominent challenges. This paper provides a survey of some of the major issues challenging the widespread adoption of the IoT. In particular, it focuses on the interoperability, management, security and privacy issues in the IoT. It is concluded that there is a need to develop a multifaceted technology approach to IoT security, management, and privacy.
UMTS Network Planning, Optimization, and Inter-Operation with GSM is an accessible, one-stop reference to help engineers effectively reduce the time and costs involved in UMTS deployment and optimization. Rahnema includes detailed coverage from both a theoretical and practical perspective on the planning and optimization aspects of UMTS, and a number of other new techniques to help operators get the most out of their networks. Provides an end-to-end perspective, from network design to optimization. Incorporates the hands-on experiences of numerous researchers. Single
Tuomainen, Mika; Mykkänen, Juha
Availability of personal health information for individual use from professional patient records is an important success factor for personal health information management (PHIM) solutions such as personal health records. In this paper we focus on this crucial part of personal wellbeing information management solutions and report the interoperability design of a personal information import service. Key requirements as well as design factors for interfaces between PHRs and EPRs are discussed. Open standards, a low implementation threshold and the acknowledgement of local markets and conventions are emphasized in the design.
Shum, Dana; Reese, Mark; Pilone, Dan; Baynes, Katie
While the ISO-19115 collection level metadata format meets many users' needs for interoperable metadata, it can be cumbersome to create it correctly. Through the MMT's simple UI experience, metadata curators can create and edit collections which are compliant with ISO-19115 without full knowledge of the NASA Best Practices implementation of ISO-19115 format. Users are guided through the metadata creation process through a forms-based editor, complete with field information, validation hints and picklists. Once a record is completed, users can download the metadata in any of the supported formats with just 2 clicks.
for this report, and all former members are thanked for their efforts in contributing to the work of this TG. RTO-TR-IST-028 vii Task... lessons learned that apply to two themes. The two themes are described in detail in the text and in the reference reports that... not conflicting models – the former is a natural development of the other. Beyond these options we have examined information interoperability in
The body-part term, HAND, ranks 48 in Swadesh's 100-Word List. This paper discusses the origin and meanings, and then the rules of semantic development in the HAND semantic field by comparing with other languages. The word itself does not only denote the body part but also things resembling hands in shape, position and function, and things associated with hands. Plenty of linguistic evidence can be found to illustrate that all human beings regard their bodies as the basis and starting point of recognition of the whole world.
A general classification of semantic transfers is presented. As the research has shown, transfers can be systematized based on four parameters: (1) the type of associations lying at their basis: similarity, contiguity and contrast, the associations by similarity and contrast being regarded as the basis for taxonomic transfers (from genus to species, from species to genus, from species to species, etc.); (2) the functional parameter: functionally relevant and irrelevant; (3) the sphere of action: transfer applies both to lexical and grammatical semantics; (4) the degree of expressiveness: thus, the metonymic associations are more predictable than the metaphoric ones.
Presupposition is a very important linguistic concept that originates from philosophy. It is often considered a kind of pragmatic inference. In linguistics it can be classified into semantic presupposition and pragmatic presupposition. This article deals with semantic presupposition. Besides its most important characteristic, constancy under negation, presupposition has some other characteristics, namely unidirectionality, subjectiveness and latency, which exactly fulfill the demands of advertising. Presupposition, used in advertising, can avoid the possible risk caused by ostentation or direct assertion. On this account, presupposition is adopted in advertising as a pragmatic strategy.
Graben, Peter Beim
In their target article, Wang and Busemeyer (2013) discuss question order effects in terms of incompatible projectors on a Hilbert space. In a similar vein, Blutner recently presented an orthoalgebraic query language essentially relying on dynamic update semantics. Here, I shall comment on some interesting analogies between the different variants of dynamic semantics and generalized quantum theory to illustrate other kinds of order effects in human cognition, such as belief revision, the resolution of anaphors, and default reasoning that result from the crucial non-commutativity of mental operations upon the belief state of a cognitive agent.
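The non-commutativity of projectors that drives such question order effects can be illustrated numerically. The two projectors and the belief state below are arbitrary examples chosen for the sketch, not taken from the cited models:

```python
import numpy as np

# Two incompatible (non-commuting) projectors on R^2, standing for the
# "yes" subspaces of two hypothetical questions. P1 projects onto the
# x-axis, P2 onto the diagonal.
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
v = np.array([1.0, 1.0]) / np.sqrt(2)        # diagonal unit vector
P2 = np.outer(v, v)

psi = np.array([0.6, 0.8])                   # belief state (unit vector)

# Probability of answering "yes" to both questions depends on the order
# in which the projectors are applied to the belief state.
p_12 = np.linalg.norm(P2 @ P1 @ psi) ** 2    # ask Q1 first, then Q2
p_21 = np.linalg.norm(P1 @ P2 @ psi) ** 2    # ask Q2 first, then Q1

print(p_12, p_21)                            # the two probabilities differ
```

Since P1 @ P2 != P2 @ P1, the two orderings give different outcome probabilities; the same non-commutativity underlies the belief revision and anaphor resolution effects mentioned in the comment.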
Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text as determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc, which requires intensive manual effort to build and maintain. Other work is based on a corpus such as the…
Boer, M.H.T. de; Daniele, L.M.; Brandt, P.; Sappelli, M.
Abstract—With the growth of open sensor networks, multiple applications in different domains make use of a large amount of sensor data, resulting in an emerging need to search semantically over heterogeneous datasets. In semantic search, an important challenge consists of bridging the semantic gap b
Rajab Abd al-Hamed
An article about the semantic web. It begins by defining the semantic web and its importance, then discusses ontology relations, then the role of the semantic web in digital libraries and the features that will serve digital libraries.
To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (simple knowledge organization system and others), mitigate common arguments from the digital library (DL) community against participation in the Semantic Web. The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to semantic content during indexing. The points criticised by the semantic web and ontology approaches ar...