WorldWideScience

Sample records for intermediate xml representation

  1. XML for data representation and model specification in neuroscience.

    Science.gov (United States)

    Crook, Sharon M; Howell, Fred W

    2007-01-01

    EXtensible Markup Language (XML) technology provides an ideal representation for the complex structure of models and neuroscience data, as it is an open file format and provides a language-independent method for storing arbitrarily complex structured information. XML is composed of text and tags that explicitly describe the structure and semantics of the content of the document. In this chapter, we describe some of the common uses of XML in neuroscience, with case studies in representing neuroscience data and defining model descriptions based on examples from NeuroML. The specific methods that we discuss include (1) reading and writing XML from applications, (2) exporting XML from databases, (3) using XML standards to represent neuronal morphology data, (4) using XML to represent experimental metadata, and (5) creating new XML specifications for models.
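
    A minimal sketch of item (1), reading and writing XML from an application, using only Python's standard library. The element and attribute names below are illustrative and are not the actual NeuroML/MorphML vocabulary.

```python
# Build a tiny morphology-like document, serialize it, and read it back.
# Element names are invented for illustration; they are not NeuroML.
import xml.etree.ElementTree as ET

cell = ET.Element("cell", id="pyramidal_0")
segment = ET.SubElement(cell, "segment", id="soma")
ET.SubElement(segment, "proximal", x="0", y="0", z="0", diameter="20")
ET.SubElement(segment, "distal", x="0", y="20", z="0", diameter="20")
xml_text = ET.tostring(cell, encoding="unicode")    # write

root = ET.fromstring(xml_text)                      # read
for seg in root.findall("segment"):
    distal = seg.find("distal")
    print(seg.get("id"), distal.get("x"), distal.get("y"), distal.get("z"))
```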

  2. PDBML: the representation of archival macromolecular structure data in XML.

    Science.gov (United States)

    Westbrook, John; Ito, Nobutoshi; Nakamura, Haruki; Henrick, Kim; Berman, Helen M

    2005-04-01

    The Protein Data Bank (PDB) has recently released versions of the PDB Exchange dictionary and the PDB archival data files in XML format collectively named PDBML. The automated generation of these XML files is driven by the data dictionary infrastructure in use at the PDB. The correspondences between the PDB dictionary and the XML schema metadata are described as well as the XML representations of PDB dictionaries and data files.

  3. XML as a cross-platform representation for medical imaging with fuzzy algorithms.

    Science.gov (United States)

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2011-01-01

    Machines that perform linguistic medical image interpretation are based on fuzzy algorithms. There are several frameworks that can edit and simulate fuzzy algorithms, but they are not compatible with most of the implemented applications. This paper proposes a representation of fuzzy algorithms in XML files, using XML as a cross-platform intermediary between the simulation framework and the software applications. The paper presents a parsing algorithm that converts files created by the simulation framework dynamically into XML files while keeping their original logical structure.

  4. Teaching object concepts for XML-based representations.

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, R. L. (Robert L.)

    2002-01-01

    Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.

  5. XML Schema Representation of DICOM Structured Reporting.

    Science.gov (United States)

    Lee, K P; Hu, Jingkun

    2003-01-01

    The Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) standard improves the expressiveness, precision, and comparability of documentation about diagnostic images and waveforms. It supports the interchange of clinical reports in which critical features shown by images and waveforms can be denoted unambiguously by the observer, indexed, and retrieved selectively by subsequent reviewers. It is essential to provide access to clinical reports across the health care enterprise by using technologies that facilitate information exchange and processing by computers as well as provide support for robust and semantically rich standards, such as DICOM. This is supported by the current trend in the healthcare industry towards the use of Extensible Markup Language (XML) technologies for storage and exchange of medical information. The objective of the work reported here is to develop XML Schema for representing DICOM SR as XML documents. We briefly describe the document type definition (DTD) for XML and its limitations, followed by XML Schema (the intended replacement for DTD) and its features. A framework for generating XML Schema for representing DICOM SR in XML is presented next. A schema instance based on an SR example in the DICOM specification was created and validated against the schema. The schema is being used extensively in producing reports on Philips Medical Systems ultrasound equipment. With the framework described it is feasible to generate XML Schema using the existing DICOM SR specification. It can also be applied to generate XML Schemas for other DICOM information objects.

  6. UPX: a new XML representation for annotated datasets of online handwriting data

    NARCIS (Netherlands)

    Agrawal, M.; Bali, K.; Madhvanath, S.; Vuurpijl, L.G.

    2005-01-01

    This paper introduces our efforts to create UPX, an XML-based successor to the venerable UNIPEN format for the representation of annotated datasets of online handwriting data. In the first part of the paper, shortcomings of the UNIPEN format are discussed and the goals of UPX are outlined. Prior work

  7. XML Schema Representation of DICOM Structured Reporting

    Science.gov (United States)

    Lee, K. P.; Hu, Jingkun

    2003-01-01

    Objective: The Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) standard improves the expressiveness, precision, and comparability of documentation about diagnostic images and waveforms. It supports the interchange of clinical reports in which critical features shown by images and waveforms can be denoted unambiguously by the observer, indexed, and retrieved selectively by subsequent reviewers. It is essential to provide access to clinical reports across the health care enterprise by using technologies that facilitate information exchange and processing by computers as well as provide support for robust and semantically rich standards, such as DICOM. This is supported by the current trend in the healthcare industry towards the use of Extensible Markup Language (XML) technologies for storage and exchange of medical information. The objective of the work reported here is to develop XML Schema for representing DICOM SR as XML documents. Design: We briefly describe the document type definition (DTD) for XML and its limitations, followed by XML Schema (the intended replacement for DTD) and its features. A framework for generating XML Schema for representing DICOM SR in XML is presented next. Measurements: None applicable. Results: A schema instance based on an SR example in the DICOM specification was created and validated against the schema. The schema is being used extensively in producing reports on Philips Medical Systems ultrasound equipment. Conclusion: With the framework described it is feasible to generate XML Schema using the existing DICOM SR specification. It can also be applied to generate XML Schemas for other DICOM information objects. PMID:12595410
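
    As a rough illustration of validating an instance document against an XML Schema, the sketch below uses the third-party Python package xmlschema (an assumption); the schema is a made-up toy fragment, not the DICOM SR schema developed in the paper.

```python
# Validate a toy report document against a toy XSD.
# Requires the third-party 'xmlschema' package; the schema is invented
# and is not the DICOM SR schema discussed in the paper.
import xmlschema

XSD = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="report">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="observer" type="xs:string"/>
        <xs:element name="finding" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOC = "<report><observer>Dr. Example</observer><finding>unremarkable</finding></report>"

schema = xmlschema.XMLSchema(XSD)
print(schema.is_valid(DOC))   # True
schema.validate(DOC)          # raises an error on invalid input
```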

  8. Converting biomolecular modelling data based on an XML representation.

    Science.gov (United States)

    Sun, Yudong; McKeever, Steve

    2008-08-25

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  9. Converting Biomolecular Modelling Data Based on an XML Representation

    Directory of Open Access Journals (Sweden)

    Sun Yudong

    2008-06-01

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  10. Securing XML Documents

    Directory of Open Access Journals (Sweden)

    Charles Shoniregun

    2004-11-01

    XML (extensible markup language) is becoming the current standard for establishing interoperability on the Web. XML data are self-descriptive and syntax-extensible; this makes it very suitable for representation and exchange of semi-structured data, and allows users to define new elements for their specific applications. As a result, the number of documents incorporating this standard is continuously increasing over the Web. The processing of XML documents may require a traversal of the whole document structure and therefore, the cost could be very high. A strong demand for a means of efficient and effective XML processing has posed a new challenge for the database world. This paper discusses a fast and efficient indexing technique for XML documents, and introduces the XML graph numbering scheme. It can be used for indexing and securing the graph structure of XML documents. This technique provides an efficient method to speed up XML data processing. Furthermore, the paper explores the classification of existing methods and their impact on query processing and indexing.
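
    The abstract does not spell out the numbering scheme, so the sketch below shows a generic pre/post-order interval labelling of an XML tree, a standard way to answer ancestor/descendant queries without re-traversal. It is a stand-in under that assumption, not the paper's exact scheme.

```python
# Label each element with (pre, post) numbers; a is an ancestor of b
# iff pre(a) < pre(b) and post(a) > post(b). Generic illustration only.
import xml.etree.ElementTree as ET

doc = ET.fromstring("<library><book><title/><author/></book><book><title/></book></library>")

labels = {}                       # element -> (pre, post)
counter = {"pre": 0, "post": 0}

def label(elem):
    counter["pre"] += 1
    pre = counter["pre"]
    for child in elem:
        label(child)
    counter["post"] += 1
    labels[elem] = (pre, counter["post"])

label(doc)

def is_ancestor(a, b):
    return labels[a][0] < labels[b][0] and labels[a][1] > labels[b][1]

first_book = doc.find("book")
print(is_ancestor(doc, first_book))   # True
print(is_ancestor(first_book, doc))   # False
```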

  11. StarDOM: From STAR format to XML

    International Nuclear Information System (INIS)

    Linge, Jens P.; Nilges, Michael; Ehrlich, Lutz

    1999-01-01

    StarDOM is a software package for the representation of STAR files as document object models and the conversion of STAR files into XML. This allows interactive navigation by using the Document Object Model representation of the data as well as easy access by XML query languages. As an example application, the entire BioMagResBank has been transformed into XML format. Using an XML query language, statistical queries on the collected NMR data sets can be constructed with very little effort. The BioMagResBank/XML data and the software can be obtained at http://www.nmr.embl-heidelberg.de/nmr/StarDOM/
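
    To make the STAR-to-XML idea concrete, here is a toy converter for a flat block of "_tag value" pairs. Real NMR-STAR files also contain save frames and loops, which this sketch deliberately ignores; the tag names are invented.

```python
# Convert simple STAR-like "_tag value" lines into an XML element tree.
# Toy example only: no save frames, loops, or quoting rules are handled.
import xml.etree.ElementTree as ET

star_text = """\
_Entry.ID             4711
_Entry.Title          Example_protein
_Entry.Spectrometer   600MHz
"""

root = ET.Element("entry")
for line in star_text.strip().splitlines():
    tag, value = line.split(None, 1)
    child = ET.SubElement(root, tag.lstrip("_").replace(".", "_"))
    child.text = value.strip()

print(ET.tostring(root, encoding="unicode"))
```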

  12. The ARES High-level Intermediate Representation

    Energy Technology Data Exchange (ETDEWEB)

    Moss, Nicholas David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-03-03

    The LLVM intermediate representation (IR) lacks semantic constructs for depicting common high-performance operations such as parallel and concurrent execution, communication and synchronization. Currently, representing such semantics in LLVM requires either extending the intermediate form (a significant undertaking) or the use of ad hoc indirect means such as encoding them as intrinsics and/or the use of metadata constructs. In this paper we discuss a work in progress to explore the design and implementation of a new compilation stage and associated high-level intermediate form that is placed between the abstract syntax tree and its lowering to LLVM's IR. This high-level representation is a superset of LLVM IR and supports the direct representation of these common parallel computing constructs along with the infrastructure for supporting analysis and transformation passes on this representation.

  13. XML Views: Part 1

    NARCIS (Netherlands)

    Rajugan, R.; Marik, V.; Retschitzegger, W.; Chang, E.; Dillon, T.; Stepankova, O.; Feng, L.

    The exponential growth and the nature of Internet and web-based applications have made eXtensible Markup Language (XML) the de-facto standard for data exchange and data dissemination. Now it is gaining momentum in replacing conventional data models for data representation. XML with its self-describing

  14. ART-ML - a novel XML format for the biological procedures modeling and the representation of blood flow simulation.

    Science.gov (United States)

    Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I

    2010-01-01

    The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of blood flow, mass transport and plaque formation, exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy to handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.

  15. Performance analysis of Java APIs for XML processing

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, XML markup language has acquired a considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as ...
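
    The study benchmarks Java APIs; as a loosely analogous sketch in Python (an assumption, not the paper's setup), the snippet below times two standard-library parsers on the same synthetic document.

```python
# Time ElementTree vs. minidom on one generated document.
# Analogous in spirit to the Java benchmark, not a reproduction of it.
import time
import xml.etree.ElementTree as ET
import xml.dom.minidom as minidom

doc = "<rows>" + "".join(f"<row id='{i}'><v>{i}</v></row>" for i in range(20000)) + "</rows>"

t0 = time.perf_counter()
ET.fromstring(doc)
t1 = time.perf_counter()
minidom.parseString(doc)
t2 = time.perf_counter()

print(f"ElementTree: {t1 - t0:.3f} s   minidom: {t2 - t1:.3f} s")
```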

  16. Processing XML with Java – a performance benchmark

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, XML markup language has acquired a considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, suc...

  17. Utilizing Structural Knowledge for Information Retrieval in XML Databases

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; Apers, Peter M.G.

    In this paper we address the problem of immediate translation of eXtensible Mark-up Language (XML) information retrieval (IR) queries to relational database expressions and stress the benefits of using an intermediate XML-specific algebra over relational algebra. We show how adding an XML-specific

  18. Bridge: Intelligent Tutoring with Intermediate Representations

    Science.gov (United States)

    1988-05-01

    Research and Development Center and Psychology Department, University of Pittsburgh, Pittsburgh, PA 15260 (The Artificial Intelligence and Psychology Project). Only fragments of this scanned report, "Intelligent Tutoring With Intermediate Representations" by Bonar and Cunningham, are recoverable, e.g.: "... problem never introduces more than one unfamiliar plan." and "The requirements are specified at four different levels, corresponding to ..."

  19. Generating XML schemas for DICOM structured reporting templates.

    Science.gov (United States)

    Zhao, Luyin; Lee, Kwok Pun; Hu, Jingkun

    2005-01-01

    In this paper, the authors describe a methodology to programmatically transform structured reporting (SR) templates defined by the Digital Imaging and Communications in Medicine (DICOM) standard into an XML schema representation. Such schemas can be used in the creation and validation of XML-encoded SR documents that use templates. Templates are a means to put additional constraints on an SR document to promote common formats for specific reporting applications or domains. As the use of templates becomes more widespread in the production of SR documents, it is important to ensure validity of such documents. The work described in this paper is an extension of the authors' previous work on XML schema representation for DICOM SR. Therefore, this paper inherits and partially modifies the structure defined in the earlier work.

  20. Personalising e-learning modules: targeting Rasmussen levels using XML.

    Science.gov (United States)

    Renard, J M; Leroy, S; Camus, H; Picavet, M; Beuscart, R

    2003-01-01

    The development of Internet technologies has made it possible to increase the number and the diversity of on-line resources for teachers and students. Initiatives like the French-speaking Virtual Medical University Project (UMVF) try to organise the access to these resources. But both teachers and students are working on a partly redundant subset of knowledge. From the analysis of some French courses we propose a model for knowledge organisation derived from Rasmussen's stepladder. In the context of decision-making Rasmussen has identified skill-based, rule-based and knowledge-based levels for the mental process. In the medical context of problem-solving, we apply these three levels to the definition of three student levels: beginners, intermediate-level learners, experts. Based on our model, we build a representation of the hierarchical structure of the data using XML. We use the XSLT transformation language to filter relevant data according to student level and to propose an appropriate display on the student's terminal. The model and the XML implementation we define help to design tools for building personalised e-learning modules.
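
    The authors filter the XML course structure with XSLT; the sketch below imitates only the filtering step in plain Python, with invented element and attribute names, to show how a level attribute can drive what each class of learner sees.

```python
# Select course items whose level does not exceed the student's level.
# Simplified stand-in for the XSLT filtering described in the paper.
import xml.etree.ElementTree as ET

course = ET.fromstring("""
<course>
  <item level="beginner">Definition of anaemia</item>
  <item level="intermediate">Differential diagnosis flowchart</item>
  <item level="expert">Atypical presentations and pitfalls</item>
</course>""")

RANK = {"beginner": 1, "intermediate": 2, "expert": 3}

def items_for(student_level):
    limit = RANK[student_level]
    return [i.text for i in course.findall("item") if RANK[i.get("level")] <= limit]

print(items_for("intermediate"))   # beginner and intermediate items only
```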

  1. NeXML: rich, extensible, and verifiable representation of comparative data and metadata.

    Science.gov (United States)

    Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-07-01

    In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.

  2. KNOWLEDGE AND XML BASED CAPP SYSTEM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shijie; SONG Laigang

    2006-01-01

    In order to enhance the intelligence of the system and improve its interactivity with other systems, a knowledge- and XML-based computer aided process planning (CAPP) system is implemented. It includes user management, bill of materials (BOM) management, knowledge-based process planning, knowledge management and database maintenance sub-systems. The nested knowledge representation method provided by the system can represent complicated arithmetic and logical relationships to handle process planning tasks. Through the representation and manipulation of XML-based technological files, the system solves some important problems in the web environment, such as the efficiency of information interaction and the refreshing of web pages. The CAPP system is written in ASP VBScript, JavaScript and Visual C++ with an Oracle database. At present, the CAPP system is running at Shenyang Machine Tools. Its functions meet the requirements of enterprise production.

  3. Design of XML-based plant data model

    International Nuclear Information System (INIS)

    Nair, Preetha M.; Padmini, S.; Gaur, Swati; Diwakar, M.P.

    2013-01-01

    XML has emerged as an open standard for exchanging structured data on various platforms and can handle rich, nested, complex data structures. XML, with its flexible tree-like data structure, allows a more natural representation compared to traditional databases. In this paper we present a data model for plant data acquisition systems captured using XML technologies. Plant data acquisition systems in a typical Nuclear Power Plant consist of embedded nodes at the first tier and operator consoles at the second tier for operator interaction and display of plant parameters. This paper discusses a generic data model that was designed to capture the process, network architecture, communication/interface protocol and diagnostics aspects required for a Nuclear Power Plant. (author)

  4. Implementasi XML Encryption (XML Enc) Menggunakan Java

    OpenAIRE

    Tenia Wahyuningrum

    2012-01-01

    As the use of XML spreads across a wide range of Internet services, most of which disseminate information over public network infrastructure, problems are emerging concerning the need for data security for the information contained in an XML document. One way to address this is to use XML Enc technology. This paper discusses how to use XML Enc with the Java programming language, in particular to encrypt XML documents (...
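
    The paper applies the W3C XML Encryption standard in Java. As a very rough, non-conformant illustration of the underlying idea (replacing an element's content with ciphertext), this Python sketch uses the third-party cryptography package; none of it follows the XML Enc wire format.

```python
# Encrypt and decrypt the text content of one element.
# NOT W3C XML Encryption: no EncryptedData elements, key transport, etc.
import xml.etree.ElementTree as ET
from cryptography.fernet import Fernet

doc = ET.fromstring("<record><name>Alice</name><diagnosis>confidential</diagnosis></record>")
fernet = Fernet(Fernet.generate_key())

target = doc.find("diagnosis")
target.text = fernet.encrypt(target.text.encode()).decode()
print(ET.tostring(doc, encoding="unicode"))          # ciphertext in place

target.text = fernet.decrypt(target.text.encode()).decode()
print(ET.tostring(doc, encoding="unicode"))          # original restored
```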

  5. Design of the XML Security System for Electronic Commerce Application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The advent of the World Wide Web (WWW) first triggered mass adoption of the Internet for public access to digital information exchange across the globe. To build a large market on the Web, a special security infrastructure needs to be put into place, transforming the wild-and-woolly Internet into a network with end-to-end protections. XML (eXtensible Markup Language) is widely accepted as a powerful data representation standard for electronic documents, so a security mechanism for XML documents must be provided in the first place to secure electronic commerce over the Internet. In this paper the authors design and implement a secure framework that provides an XML signature function, an XML element-wise encryption function, a smart-card-based crypto API library and Public Key Infrastructure (PKI) security functions to achieve confidentiality, integrity, message authentication, and/or signer authentication services for XML documents and existing non-XML documents exchanged over the Internet for e-commerce applications.

  6. The role of XML in the CMS detector description

    International Nuclear Information System (INIS)

    Liendl, M.; Lingen, F.van; Todorov, T.; Arce, P.; Furtjes, A.; Innocente, V.; Roeck, A. de; Case, M.

    2001-01-01

    Offline Software such as Simulation, Reconstruction, Analysis, and Visualisation are all in need of a detector description. These applications have several common but also many specific requirements for the detector description in order to build up their internal representations. To achieve this in a consistent and coherent manner a common source of information, the detector description database, will be consulted by each of the applications. The role and suitability of XML in the design of the detector description database in the scope of the CMS detector at the LHC is discussed. Different aspects such as data modelling capabilities of XML, tool support, integration to C++ representations of data models are treated and recent results of prototype implementations are presented

  7. Specifying OLAP Cubes On XML Data

    DEFF Research Database (Denmark)

    Jensen, Mikael Rune; Møller, Thomas Holmgren; Pedersen, Torben Bach

    On-Line Analytical Processing (OLAP) enables analysts to gain insight into data through fast and interactive access to a variety of possible views on information, organized in a dimensional model. The demand for data integration is rapidly becoming larger as more and more information sources appear in modern enterprises. In the data warehousing approach, selected information is extracted in advance and stored in a repository. This approach is used because of its high performance. However, in many situations a logical (rather than physical) integration of data is preferable. Previous web-based data... Extensible Markup Language (XML) is fast becoming the new standard for data representation and exchange on the World Wide Web. The rapid emergence of XML data on the web, e.g., business-to-business (B2B) ecommerce, is making it necessary for OLAP and other data analysis tools to handle XML data as well...

  8. XML, Ontologies, and Their Clinical Applications.

    Science.gov (United States)

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.

  9. Converting from XML to HDF-EOS

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program recreates an HDF-EOS file from an Extensible Markup Language (XML) representation of the contents of that file. This program is one of two programs written to enable testing of the schemas described in the immediately preceding article to determine whether the schemas capture all details of HDF-EOS files.

  10. An XML schema for automated data integration in a Multi-Source Information System dedicated to end-stage renal disease.

    Science.gov (United States)

    Dufour, Eric; Ben Saïd, Mohamed; Jais, Jean Philippe; Le Mignot, Loic; Richard, Jean-Baptiste; Landais, Paul

    2009-01-01

    Data exchange and interoperability between clinical information systems represent a crucial issue in the context of patient record data collection. An XML representation schema adapted to end-stage renal disease (ESRD) patients was developed and successfully tested against patient data in the dedicated Multi-Source Information System (MSIS) active file (more than 16,000 patient records). The ESRD-XML-Schema is organized into schema subsets respecting the coherence of the clinical information and enriched with coherent data types. Tests were run against XML data files generated in conformity with the ESRD-XML-Schema. Manual tests allowed validation of the data format and content against the XML schema. Programmatic tests allowed the design of generic XML parsing routines, a portable object data model representation and the implementation of automatic data-exchange flows with the MSIS database system. The ESRD-XML-Schema represents a valid framework for data exchange and supports interoperability. Its modular design offers the opportunity to simplify physicians' many tasks so that they can give priority to their clinical work.

  11. ScotlandsPlaces XML: Bespoke XML or XML Mapping?

    Science.gov (United States)

    Beamer, Ashley; Gillick, Mark

    2010-01-01

    Purpose: The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross-domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them.…
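
    As a sketch of the parameterised-URL pattern the paper investigates, the snippet below builds a query URL and parses an XML response of an assumed shape; the endpoint, parameters, and element names are hypothetical, not the ScotlandsPlaces API.

```python
# Build a parameterised query URL and parse a canned XML response.
# Endpoint and response structure are invented for illustration.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

params = {"placename": "Edinburgh", "recordtype": "map", "page": 1}
url = "https://example.org/api/search?" + urlencode(params)
print(url)

response_text = """<results>
  <result><title>Plan of Edinburgh, 1765</title></result>
  <result><title>Ordnance Survey sheet 32</title></result>
</results>"""
for result in ET.fromstring(response_text).findall("result"):
    print(result.findtext("title"))
```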

  12. Invisible XML

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2013-01-01

    What if you could see everything as XML? XML has many strengths for data exchange, strengths both inherent in the nature of XML markup and strengths that derive from the ubiquity of tools that can process XML. For authoring, however, other forms are preferred: no one writes CSS or

  13. RelaXML

    DEFF Research Database (Denmark)

    Knudsen, Steffen Ulsø; Pedersen, Torben Bach; Thomsen, Christian

    In modern enterprises, almost all data is stored in relational databases. Additionally, most enterprises increasingly collaborate with other enterprises in long-running read-write workflows, primarily through XML-based data exchange technologies such as web services. However, bidirectional XML data exchange is cumbersome and must often be hand-coded, at considerable expense. This paper remedies the situation by proposing RELAXML, an automatic and effective approach to bidirectional XML-based exchange of relational data. RELAXML supports re-use through multiple inheritance, and handles both export of relational data to XML documents and (re-)import of XML documents with a large degree of flexibility in terms of the SQL statements and XML document structures supported. Import and export are formally defined so as to avoid semantic problems, and algorithms to implement both are given. A performance study...
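
    A minimal sketch of the export direction only (relational rows to an XML document) using the Python standard library; RELAXML itself also covers re-import, inheritance, and flexible document structures, none of which are shown here.

```python
# Export the rows of one table as an XML document.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, "Ann", "Aalborg"), (2, "Bo", "Aarhus")])

root = ET.Element("customers")
cursor = conn.execute("SELECT id, name, city FROM customer")
columns = [d[0] for d in cursor.description]
for row in cursor:
    elem = ET.SubElement(root, "customer")
    for col, value in zip(columns, row):
        ET.SubElement(elem, col).text = str(value)

print(ET.tostring(root, encoding="unicode"))
```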

  14. RelaXML

    DEFF Research Database (Denmark)

    Knudsen, Steffen Ulsø; Pedersen, Torben Bach; Thomsen, Christian

    In modern enterprises, almost all data is stored in relational databases. Additionally, most enterprises increasingly collaborate with other enterprises in long-running read-write workflows, primarily through XML-based data exchange technologies such as web services. However, bidirectional XML data exchange is cumbersome and must often be hand-coded, at considerable expense. This paper remedies the situation by proposing RELAXML, an automatic and effective approach to bidirectional XML-based exchange of relational data. RELAXML supports re-use through multiple inheritance, and handles both export of relational data to XML documents and (re-)import of XML documents with a large degree of flexibility in terms of the SQL statements and XML document structures supported. Import and export are formally defined so as to avoid semantic problems, and algorithms to implement both are given. A performance study...

  15. XML Files

    Science.gov (United States)

    MedlinePlus XML Files (https://medlineplus.gov/xml.html): MedlinePlus produces XML data sets that you are welcome to download ...

  16. phyloXML: XML for evolutionary biology and comparative genomics.

    Science.gov (United States)

    Han, Mira V; Zmasek, Christian M

    2009-10-27

    Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. We developed an XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, as well as tools implementing and supporting phyloXML, is available at http://www.phyloxml.org.
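
    A simplified sketch of a two-leaf tree expressed with phyloXML-style elements (clade, name, branch_length); it omits namespaces and is not validated against the official XSD, so treat the exact element layout as an assumption.

```python
# Build and read back a tiny phyloXML-like tree (namespaces omitted).
import xml.etree.ElementTree as ET

phylogeny = ET.Element("phylogeny", rooted="true")
root_clade = ET.SubElement(phylogeny, "clade")
for taxon, length in [("Homo sapiens", "0.1"), ("Pan troglodytes", "0.1")]:
    leaf = ET.SubElement(root_clade, "clade")
    ET.SubElement(leaf, "name").text = taxon
    ET.SubElement(leaf, "branch_length").text = length

print(ET.tostring(phylogeny, encoding="unicode"))

for clade in phylogeny.iter("clade"):
    if clade.findtext("name"):
        print(clade.findtext("name"), clade.findtext("branch_length"))
```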

  17. Information persistence using XML database technology

    Science.gov (United States)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and

  18. QuakeML - An XML Schema for Seismology

    Science.gov (United States)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.

  19. Plug-and-Play XML

    Science.gov (United States)

    Schweiger, Ralf; Hoelzer, Simon; Altmann, Udo; Rieger, Joerg; Dudeck, Joachim

    2002-01-01

    The application of XML (Extensible Markup Language) is still costly. The authors present an approach to ease the development of XML applications. They have developed a Web-based framework that combines existing XML resources into a comprehensive XML application. The XML framework is model-driven, i.e., the authors primarily design XML document models (XML schema, document type definition), and users can enter, search, and view related XML documents using a Web browser. The XML model itself is flexible and might be composed of existing model standards. The second part of the paper relates the approach of the authors to some problems frequently encountered in the clinical documentation process. PMID:11751802

  20. Development Life Cycle and Tools for XML Content Models

    Energy Technology Data Exchange (ETDEWEB)

    Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Buhwan, Jeong [POSTECH University, South Korea]; Goyal, Puja [National Institute of Standards and Technology (NIST)]

    2004-11-01

    Many integration projects today rely on shared semantic models based on standards represented using Extensible Markup Language (XML) technologies. Shared semantic models typically evolve and require maintenance. In addition, to promote interoperability and reduce integration costs, the shared semantics should be reused as much as possible. Semantic components must be consistent and valid in terms of agreed upon standards and guidelines. In this paper, we describe an activity model for creation, use, and maintenance of a shared semantic model that is coherent and supports efficient enterprise integration. We then use this activity model to frame our research and the development of tools to support those activities. We provide overviews of these tools primarily in the context of the W3C XML Schema. At present, we focus our work on the W3C XML Schema as the representation of choice, due to its extensive adoption by industry.

  1. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, thus making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve a seamless integration of the current biological resources.

  2. An XML Representation for Crew Procedures

    Science.gov (United States)

    Simpson, Richard C.

    2005-01-01

    NASA ensures safe operation of complex systems through the use of formally-documented procedures, which encode the operational knowledge of the system as derived from system experts. Crew members use procedure documentation on the ground for training purposes and on-board space shuttle and space station to guide their activities. Investigators at JSC are developing a new representation for procedures that is content-based (as opposed to display-based). Instead of specifying how a procedure should look on the printed page, the content-based representation will identify the components of a procedure and (more importantly) how the components are related (e.g., how the activities within a procedure are sequenced; what resources need to be available for each activity). This approach will allow different sets of rules to be created for displaying procedures on a computer screen, on a hand-held personal digital assistant (PDA), verbally, or on a printed page, and will also allow intelligent reasoning processes to automatically interpret and use procedure definitions. During his NASA fellowship, Dr. Simpson examined how various industries represent procedures (also called business processes or workflows), in areas such as manufacturing, accounting, shipping, or customer service. A useful method for designing and evaluating workflow representation languages is by determining their ability to encode various workflow patterns, which depict abstract relationships between the components of a procedure removed from the context of a specific procedure or industry. Investigators have used this type of analysis to evaluate how well-suited existing workflow representation languages are for various industries based on the workflow patterns that commonly arise across industry-specific procedures. Based on this type of analysis, it is already clear that existing workflow representations capture discrete flow of control (i.e., when one activity should start and stop based on when other

  3. XML under the Hood.

    Science.gov (United States)

    Scharf, David

    2002-01-01

    Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…

  4. Semantically Interoperable XML Data.

    Science.gov (United States)

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-09-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas could be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups.

  5. Semantically Interoperable XML Data

    Science.gov (United States)

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas could be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789

  6. The SGML Standardization Framework and the Introduction of XML

    Science.gov (United States)

    Grütter, Rolf

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future. PMID:11720931

  7. A Simple XML Producer-Consumer Protocol

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of
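
    As a generic illustration of the producer-consumer idea (not the Grid Forum protocol proposed in the paper), the sketch below serializes an event to XML on the producer side and decodes it on the consumer side; the element names are made up.

```python
# Pass an XML-encoded event through an in-process queue.
import queue
import xml.etree.ElementTree as ET

events = queue.Queue()

def produce(source, name, value):
    event = ET.Element("event", source=source, name=name)
    ET.SubElement(event, "value").text = str(value)
    events.put(ET.tostring(event, encoding="unicode"))

def consume():
    event = ET.fromstring(events.get())
    return event.get("source"), event.get("name"), float(event.findtext("value"))

produce("node42", "cpu.load", 0.73)
print(consume())   # ('node42', 'cpu.load', 0.73)
```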

  8. On the effectiveness of XML schema validation for countering XML signature wrapping attacks

    DEFF Research Database (Denmark)

    Jensen, Meiko; Meyer, Christopher; Somorovsky, Juraj

    2011-01-01

    In the context of security of Web Services, the XML Signature Wrapping attack technique has lately received increasing attention. Following a broad range of real-world exploits, general interest in applicable countermeasures rises. However, few approaches for countering these attacks have been investigated closely enough to make any claims about their effectiveness. In this paper, we analyze the effectiveness of the specific countermeasure of XML Schema validation in terms of fending off Signature Wrapping attacks. We investigate the problems of XML Schema validation for Web Services messages, and discuss the approach of Schema Hardening, a technique for strengthening XML Schema declarations. We conclude that XML Schema validation with a hardened XML Schema is capable of fending off XML Signature Wrapping attacks, but bears some pitfalls and disadvantages as well.

  9. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O., E-mail: naito.osamu@jaea.go.j [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka, Ibaraki 311-0193 (Japan)

    2010-07-15

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.
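
    A toy version of the loose-schema idea: an XML "tag file" that describes a separate binary body well enough for a reader to interpret it without prior knowledge. The element names, attributes, and layout are invented for illustration and are not the paper's format.

```python
# Describe a binary body with a small XML tag file, then read the body
# using only the information in the tag file.
import struct
import xml.etree.ElementTree as ET

body = struct.pack("<4d", 1.0, 2.5, 3.25, 4.0)   # 4 little-endian float64 samples

tag_file = """<diagnostic name="interferometer">
  <data type="float64" byteorder="little" count="4"/>
</diagnostic>"""

meta = ET.fromstring(tag_file).find("data")
count = int(meta.get("count"))
fmt = ("<" if meta.get("byteorder") == "little" else ">") + f"{count}d"
print(struct.unpack(fmt, body))
```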

  10. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    International Nuclear Information System (INIS)

    Naito, O.

    2010-01-01

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.

  11. XML in Libraries.

    Science.gov (United States)

    Tennant, Roy, Ed.

    This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…

  12. XML to XML through XML

    NARCIS (Netherlands)

    Lemmens, W.J.M.; Houben, G.J.P.M.

    2001-01-01

    XML documents are used to exchange data. Data exchange implies the transformation of the original data to a different structure. Often such transformations need to be adapted to some specific situation, like the rendering to non-standard platforms for display or the support of special user

  13. XML Graphs in Program Analysis

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    2007-01-01

    XML graphs have shown to be a simple and effective formalism for representing sets of XML documents in program analysis. It has evolved through a six year period with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey four very different applications: XML in Java, Java Servlets and JSP, transformations between XML and non-XML data, and XSLT.

  14. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    XML is successful as a machine processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML. We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.

  15. XML Graphs in Program Analysis

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael I.

    2011-01-01

    XML graphs have been shown to be a simple and effective formalism for representing sets of XML documents in program analysis. The formalism has evolved through a six-year period with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey the use of XML graphs for program analysis with four very different languages: XACT (XML in Java), Java Servlets (Web application programming), XSugar ...

  16. Expressiveness considerations of XML signatures

    DEFF Research Database (Denmark)

    Jensen, Meiko; Meyer, Christopher

    2011-01-01

    XML Signatures are used to protect XML-based Web Service communication against a broad range of attacks related to man-in-the-middle scenarios. However, due to the complexity of the Web Services specification landscape, the task of applying XML Signatures in a robust and reliable manner becomes more and more challenging. In this paper, we investigate this issue, describing how an attacker can still interfere with Web Services communication even in the presence of XML Signatures. Additionally, we discuss the interrelation of XML Signatures and XML Encryption, focussing on their security ...

  17. XML: Ejemplos de uso

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - XML application = markup language = vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  18. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    XML is successful as a machine processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative, more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML. We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes ...

  19. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  20. XML and its impact on content and structure in electronic health care documents.

    Science.gov (United States)

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself is about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338

  1. XML-Intensive software development

    OpenAIRE

    Ibañez Anfurrutia, Felipe

    2016-01-01

    168 p. 1. Introduction. XML is a meta-markup language, that is, it can be used fundamentally to create markup languages. The presence of XML is a widespread phenomenon. However, its youth means that developers face many challenges when using XML in cutting-edge applications. This thesis confronts XML with three different scenarios: document interchange, Software Product Lines (LPS), and Domain-Specific Languages (LSD). The digital interchange ...

  2. DICOM involving XML path-tag

    Science.gov (United States)

    Zeng, Qiang; Yao, Zhihong; Liu, Lei

    2011-03-01

    Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. XML (Extensible Markup Language) is a set of rules for encoding documents in machine-readable form which has become more and more popular. The combination of the two is very necessary and promising. Using XML tags instead of numeric labels in DICOM files will effectively increase the readability and enhance the clear hierarchical structure of DICOM files. However, because XML elements rely heavily on the order of the tags, this strong data dependency reduces the flexibility of inserting and exchanging data. In order to improve the extensibility and sharing of DICOM files, this paper introduces the XML Path-Tag to DICOM. When a DICOM file is converted to XML format, adding a simple Path-Tag to the DICOM file in place of complex tags keeps the flexibility of a DICOM file while inserting data elements and takes full advantage of the structure and readability of an XML file. Our method can solve the weak readability problem of DICOM files and the tedious work of inserting data into an XML file. In addition, we set up a conversion engine that can efficiently transform among traditional DICOM files, XML-DCM files, and XML-DCM files involving the XML Path-Tag.
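
    To illustrate only the readability argument (this is not the XML-DCM or Path-Tag format defined in the paper), the sketch below maps a couple of numeric DICOM (group, element) labels, hard-coded here as assumptions, to named XML elements while keeping the numeric labels as attributes.

    ```python
    import xml.etree.ElementTree as ET

    # A couple of DICOM attributes keyed by (group, element); the dictionary and
    # dataset are hard-coded assumptions for the example.
    DICOM_DICT = {(0x0010, 0x0010): "PatientName", (0x0008, 0x0060): "Modality"}
    dataset = {(0x0010, 0x0010): "DOE^JOHN", (0x0008, 0x0060): "MR"}

    root = ET.Element("dicom")
    for tag, value in dataset.items():
        el = ET.SubElement(root, DICOM_DICT.get(tag, "Unknown"))
        el.set("group", f"{tag[0]:04X}")      # keep the numeric label as attributes
        el.set("element", f"{tag[1]:04X}")    # so nothing from DICOM is lost
        el.text = value

    print(ET.tostring(root, encoding="unicode"))
    ```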

  3. XML Based Course Websites.

    Science.gov (United States)

    Wollowski, Michael

    XML, the extensible markup language, is a quickly evolving technology that presents a viable alternative to courseware products and promises to ease the burden of Web authors, who edit their course pages directly. XML uses tags to label kinds of contents, rather than format information. The use of XML enables faculty to focus on providing…

  4. Querying XML Data with SPARQL

    Science.gov (United States)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In recent years XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  5. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  6. Updating Recursive XML Views of Relations

    DEFF Research Database (Denmark)

    Choi, Byron; Cong, Gao; Fan, Wenfei

    2009-01-01

    This paper investigates the view update problem for XML views published from relational data. We consider XML views defined in terms of mappings directed by possibly recursive DTDs compressed into DAGs and stored in relations. We provide new techniques to efficiently support XML view updates specified in terms of XPath expressions with recursion and complex filters. The interaction between XPath recursion and DAG compression of XML views makes the analysis of the XML view update problem rather intriguing. Furthermore, many issues are still open even for relational view updates, and need to be explored. In response to these, on the XML side, we revise the notion of side effects and update semantics based on the semantics of XML views, and present efficient algorithms to translate XML updates to relational view updates. On the relational side, we propose a mild condition on SPJ views, and show ...

  7. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...

  8. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
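
    A sketch of the generic side of this idea: ordinary, XML-agnostic text compression applied to an XML serialization. The document below is synthetic, and the PXML-specific push-down and DAG techniques from the paper are not reproduced here.

    ```python
    import gzip
    import xml.etree.ElementTree as ET

    # Build a synthetic, repetitive XML document.
    root = ET.Element("people")
    for i in range(100):
        person = ET.SubElement(root, "person", id=str(i))
        ET.SubElement(person, "name").text = f"Person {i}"

    raw = ET.tostring(root)            # plain XML bytes
    packed = gzip.compress(raw)        # generic, XML-agnostic compression
    print(len(raw), "bytes raw ->", len(packed), "bytes gzipped")
    ```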

  9. Intelligent Search on XML Data

    NARCIS (Netherlands)

    Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.; Unknown, [Unknown

    2003-01-01

    Recently, we have seen a steep increase in the popularity and adoption of XML, in areas such as traditional databases, e-business, the scientific environment, and on the web. Querying XML documents and data efficiently is a challenging issue; this book approaches search on XML data by combining

  10. Storing XML Documents in Databases

    NARCIS (Netherlands)

    A.R. Schmidt; S. Manegold (Stefan); M.L. Kersten (Martin); L.C. Rivero; J.H. Doorn; V.E. Ferraggine

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML ...

  11. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2008-01-01

    ... of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.

  12. Semantic validation of standard-based electronic health record documents with W3C XML schema.

    Science.gov (United States)

    Rinner, C; Janzek-Hawlat, S; Sibinovic, S; Duftschmid, G

    2010-01-01

    The goal of this article is to examine whether W3C XML Schema provides a practicable solution for the semantic validation of standard-based electronic health record (EHR) documents. By semantic validation we mean that the EHR documents are checked for conformance with the underlying archetypes and reference model. We describe an approach that allows XML Schemas to be derived from archetypes based on a specific naming convention. The archetype constraints are augmented with additional components of the reference model within the XML Schema representation. A copy of the EHR document that is transformed according to the aforementioned naming convention is used for the actual validation against the XML Schema. We tested our approach by semantically validating EHR documents conformant to three different ISO/EN 13606 archetypes corresponding to three sections of the CDA implementation guide "Continuity of Care Document (CCD)" and an implementation guide for diabetes therapy data. We further developed a tool to automate the different steps of our semantic validation approach. For two particular kinds of archetype prescriptions, individual transformations are required for the corresponding EHR documents. Otherwise, a fully generic validation is possible. In general, we consider W3C XML Schema a practicable solution for the semantic validation of standard-based EHR documents.
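
    A minimal sketch of XML Schema validation, assuming the third-party lxml library is installed; the schema and documents below are illustrative and unrelated to the archetype-derived schemas described in the paper.

    ```python
    from lxml import etree   # third-party; assumed installed

    # Illustrative schema: a single integer-valued element.
    schema_doc = etree.XML(b"""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="observation">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="systolic" type="xs:integer"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>""")
    schema = etree.XMLSchema(schema_doc)

    good = etree.XML(b"<observation><systolic>120</systolic></observation>")
    bad = etree.XML(b"<observation><systolic>high</systolic></observation>")
    print(schema.validate(good))   # True
    print(schema.validate(bad))    # False: violates the xs:integer datatype
    ```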

  13. Using XML to encode TMA DES metadata

    Directory of Open Access Journals (Sweden)

    Oliver Lyttleton

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.
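
    As a sketch of "RDF encoded in XML" for a metadata definition, the snippet below emits a tiny RDF/XML description using Dublin Core properties; the URN, property choices, and the example data element are assumptions for illustration, not the TMA DES or ISO 11179 content.

    ```python
    import xml.etree.ElementTree as ET

    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    DC = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("rdf", RDF)
    ET.register_namespace("dc", DC)

    # Hypothetical data-element description, loosely in the spirit of expressing
    # ISO 11179-style semantics as RDF serialized in XML.
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description",
                         {f"{{{RDF}}}about": "urn:example:cde:block_temperature"})
    ET.SubElement(desc, f"{{{DC}}}title").text = "Block Temperature"
    ET.SubElement(desc, f"{{{DC}}}description").text = (
        "Storage temperature of the tissue block, in degrees Celsius.")

    print(ET.tostring(root, encoding="unicode"))
    ```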

  14. Using XML to encode TMA DES metadata.

    Science.gov (United States)

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.

  15. Using XML to encode TMA DES metadata

    Science.gov (United States)

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs. PMID:21969921

  16. XML documents cluster research based on frequent subpatterns

    Science.gov (United States)

    Ding, Tienan; Li, Wei; Li, Xiongfei

    2015-12-01

    XML data is widely used for information exchange on the Internet, and XML document clustering is a hot research topic. In the XML document clustering process, measuring the difference between two XML documents is time-costly and impacts the efficiency of the clustering. This paper proposes an XML document clustering method based on frequent patterns of the XML document dataset. It first proposes a coding tree structure for encoding XML documents, translating frequent pattern mining from XML documents into frequent pattern mining from strings. It then applies cosine similarity and cohesive hierarchical clustering to the XML document dataset via the frequent patterns. Because the frequent patterns are subsets of the original XML document data, the time consumed by the XML document similarity measure is reduced. Experiments on synthetic and real datasets show that the method is efficient.
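
    A minimal sketch of the similarity step only, assuming two documents already summarized as counts of hypothetical frequent structural patterns; the frequent-pattern mining and the hierarchical clustering themselves are not reproduced here.

    ```python
    from collections import Counter
    from math import sqrt

    # Two documents summarized as counts of invented frequent structural patterns.
    doc_a = Counter({"book/title": 3, "book/author": 2, "book/price": 1})
    doc_b = Counter({"book/title": 2, "book/author": 2, "journal/issue": 4})

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm

    print(round(cosine(doc_a, doc_b), 3))
    ```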

  17. XML and Better Web Searching.

    Science.gov (United States)

    Jackson, Joe; Gilstrap, Donald L.

    1999-01-01

    Addresses the implications of the new Web metalanguage XML for searching on the World Wide Web and considers the future of XML on the Web. Compared to HTML, XML is more concerned with structure of data than documents, and these data structures should prove conducive to precise, context rich searching. (Author/LRW)

  18. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schemas.

  19. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...

  20. XML Transformations

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2012-04-01

    XSLT style sheets are designed to transform the XML documents into something else. The two most popular parsers of the moment are the Document Object Model (DOM) and the Simple API for XML (SAX). DOM is an official recommendation of the W3C (available at http://www.w3.org/TR/REC-DOM-Level-1), while SAX is a de facto standard. A good parser should be fast, space efficient, rich in functionality and easy to use.
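
    A small sketch of an XSLT transformation applied from Python, assuming the third-party lxml library; the stylesheet and input document are invented for illustration.

    ```python
    from lxml import etree   # third-party; assumed installed

    # Illustrative stylesheet: renders an XML item list as plain text.
    xslt_root = etree.XML(b"""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/items">
        <xsl:for-each select="item">
          <xsl:text>* </xsl:text><xsl:value-of select="."/><xsl:text>&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(xslt_root)
    doc = etree.XML(b"<items><item>DOM</item><item>SAX</item></items>")
    print(str(transform(doc)))   # "* DOM" and "* SAX", one per line
    ```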

  1. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each structural component of the mechanistic pathways is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half-bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of a reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting it into SVG documents, enabling the layouts conventionally expected by chemists. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  2. Beginning XML, 5th Edition

    CERN Document Server

    Fawcett, Joe; Quin, Liam R E

    2012-01-01

    A complete update covering the many advances to the XML language The XML language has become the standard for writing documents on the Internet and is constantly improving and evolving. This new edition covers all the many new XML-based technologies that have appeared since the previous edition four years ago, providing you with an up-to-date introductory guide and reference. Packed with real-world code examples, best practices, and in-depth coverage of the most important and relevant topics, this authoritative resource explores both the advantages and disadvantages of XML and addresses the mo

  3. XML technology planning database : lessons learned

    Science.gov (United States)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  4. Desain Sistem Keamanan Distribusi Data Dengan Menerapkan XML Encryption Dan XML Signature Berbasis Teknologi Web Service

    Directory of Open Access Journals (Sweden)

    Slamet Widodo

    2012-01-01

    The development of information technology is often misused by organizations or individuals for criminal acts, such as stealing and modifying information in data distribution for malicious purposes. The Rural Bank of Boyolali conducts online financial transactions rather intensively, and therefore requires a security system for the distribution of data and credit transactions of its customers between branch offices and the head office. The purpose of this study was to build a security system for the credit transactions of customers of the Rural Bank of Boyolali between branch offices and the head office. One way of protecting the data distribution is to use XML Encryption and XML Signature. XML encryption and XML digital signature techniques were applied via web services using the AES (Advanced Encryption Standard) and RSA (Rivest-Shamir-Adleman) algorithms. The study resulted in a SOAP (Simple Object Access Protocol) message security system, with XML and WSDL (Web Services Description Language) over HTTP (Hypertext Transfer Protocol), to protect customers' credit transactions from intruders. Analysis indicated that the data size (bytes) transferred with uncompressed XML Encryption was larger than with compressed XML Encryption, and that the processing time for the compressed data was faster than for uncompressed XML Encryption.
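
    A heavily simplified sketch of the encryption idea only, assuming the third-party cryptography package; real W3C XML Encryption and XML Signature wrap the result in EncryptedData/Signature elements with canonicalization and key information, none of which is reproduced here, and the payload is invented.

    ```python
    import os
    import xml.etree.ElementTree as ET
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Hypothetical credit-transaction payload; the element names are invented.
    payload = ET.tostring(ET.fromstring(
        "<credit><customer>C-1001</customer><amount>2500</amount></credit>"))

    key = AESGCM.generate_key(bit_length=128)   # symmetric AES key only
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)

    # Round-trip check: a receiver holding the same key recovers the XML payload.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == payload
    ```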

  5. Towards the XML schema measurement based on mapping between XML and OO domain

    Science.gov (United States)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research led by the ideas described in this paper is whether we can apply already defined object-oriented design metrics to XML schemas based on predefined mappings. In this paper, basic ideas for the mentioned mapping are presented. This mapping is a prerequisite for setting up the future approach to XML schema quality measurement with object-oriented metrics.

  6. The Graphical Representation of the Digital Astronaut Physiology Backbone

    Science.gov (United States)

    Briers, Demarcus

    2010-01-01

    This report summarizes my internship project with the NASA Digital Astronaut Project to analyze the Digital Astronaut (DA) physiology backbone model. The Digital Astronaut Project (DAP) applies integrated physiology models to support space biomedical operations, and to assist NASA researchers in closing knowledge gaps related to human physiologic responses to space flight. The DA physiology backbone is a set of integrated physiological equations and functions that model the interacting systems of the human body. The current release of the model is HumMod (Human Model) version 1.5 and was developed over forty years at the University of Mississippi Medical Center (UMMC). The physiology equations and functions are scripted in an XML schema specifically designed for physiology modeling by Dr. Thomas G. Coleman at UMMC. Currently it is difficult to examine the physiology backbone without being knowledgeable of the XML schema. While investigating and documenting the tags and algorithms used in the XML schema, I proposed a standard methodology for a graphical representation. This standard methodology may be used to transcribe graphical representations from the DA physiology backbone. In turn, the graphical representations can allow examination of the physiological functions and equations without the need to be familiar with the computer programming languages or markup languages used by DA modeling software.

  7. Automata, Logic, and XML

    OpenAIRE

    NEVEN, Frank

    2002-01-01

    We survey some recent developments in the broad area of automata and logic which are motivated by the advent of XML. In particular, we consider unranked tree automata, tree-walking automata, and automata over infinite alphabets. We focus on their connection with logic and on questions imposed by XML.

  8. Speed up of XML parsers with PHP language implementation

    Science.gov (United States)

    Georgiev, Bozhidar; Georgieva, Adriana

    2012-11-01

    In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for combining the PHP5 language and the XML standard are described. The details of the parsing process with SimpleXML are also clarified. A practical project, PHP-XML-MySQL, presents the advantages of XML implementation in PHP modules. This approach allows comparatively simple searching of XML hierarchical data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.

  9. XML and Free Text.

    Science.gov (United States)

    Riggs, Ken Roger

    2002-01-01

    Discusses problems with marking free text, text that is either natural language or semigrammatical but unstructured, that prevent well-formed XML from marking text for readily available meaning. Proposes a solution to mark meaning in free text that is consistent with the intended simplicity of XML versus SGML. (Author/LRW)

  10. Compressing Aviation Data in XML Format

    Science.gov (United States)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and what the optimal settings are to be used with an XML compression tool.

  11. XML in Projects GNU Gama and 3DGI

    DEFF Research Database (Denmark)

    Kolar, Jan; Soucek, Petr; Cepek, Ales

    2003-01-01

    This paper presents our practical experiences with XML in geodetic and geographical applications. The main concepts and ideas of XML are introduced in an example of a simple web based information system, which exploits the XHTML language. The article further describes how XML is used in GNU Gama for structuring data for a geodetic network adjustment. In another application of XML, it is demonstrated how XML can be used for a unified description of data from leveling registration units. Finally, the use of XML for modelling 3D geographical features within the 3DGI project is presented and a relation ...

  12. Integrity Based Access Control Model for Multilevel XML Document

    Institute of Scientific and Technical Information of China (English)

    HONG Fan; FENG Xue-bin; HUANO Zhi; ZHENG Ming-hui

    2008-01-01

    XML's increasing popularity highlights the security demand for XML documents. A mandatory access control model for XML documents is presented on the basis of an investigation of the functional dependencies of XML documents and a discussion of the integrity properties of multilevel XML documents. Then, the algorithms for decomposing/recovering a multilevel XML document into/from single-level documents are given, and the manipulation rules for typical operations of XQuery and XUpdate (QUERY, INSERT, UPDATE, and REMOVE) are elaborated. The multilevel XML document access model can meet the requirements of sensitive information processing applications.

  13. XML: Ejemplos de uso (presentación)

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - XML application = markup language = vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  14. The Cadmio XML healthcare record.

    Science.gov (United States)

    Barbera, Francesco; Ferri, Fernando; Ricci, Fabrizio L; Sottile, Pier Angelo

    2002-01-01

    The management of clinical data is a complex task. Patient-related information reported in patient folders is a set of heterogeneous and structured data accessed by different users having different goals (in local or geographical networks). The XML language provides a mechanism for describing, manipulating, and visualising structured data in web-based applications. XML ensures that the structured data is managed in a uniform and transparent manner independently from the applications and their providers, guaranteeing some interoperability. Extracting data from the healthcare record and structuring them according to XML makes the data available through browsers. The MIC/MIE model (Medical Information Category/Medical Information Elements), which allows the definition and management of healthcare records and is used in CADMIO, a HISA-based project, is described in this paper, using XML to allow the data to be visualised through web browsers.

  15. XML Flight/Ground Data Dictionary Management

    Science.gov (United States)

    Wright, Jesse; Wiklow, Colette

    2007-01-01

    A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.

  16. XML Publishing with Adobe InDesign

    CERN Document Server

    Hoskins, Dorothy

    2010-01-01

    From Adobe InDesign CS2 to InDesign CS5, the ability to work with XML content has been built into every version of InDesign. Some of the useful applications are importing database content into InDesign to create catalog pages, exporting XML that will be useful for subsequent publishing processes, and building chunks of content that can be reused in multiple publications. In this Short Cut, we'll play with the contents of a college course catalog and see how we can use XML for course descriptions, tables, and other content. Underlying principles of XML structure, DTDs, and the InDesign namesp

  17. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  18. XML, TEI, and Digital Libraries in the Humanities.

    Science.gov (United States)

    Nellhaus, Tobin

    2001-01-01

    Describes the history and major features of XML and TEI, discusses their potential utility for the creation of digital libraries, and focuses on XML's application in the humanities, particularly theater and drama studies. Highlights include HTML and hyperlinks; the impact of XML on text encoding and document access; and XML and academic…

  19. An Introduction to the Extensible Markup Language (XML).

    Science.gov (United States)

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)

  20. Statistical Language Models for Intelligent XML Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    The XML standards that are currently emerging have a number of characteristics that can also be found in database management systems, like schemas (DTDs and XML schema) and query languages (XPath and XQuery). Following this line of reasoning, an XML database might resemble traditional database

  1. Statistical language Models for Intelligent XML Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Blanken, H.M.; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    The XML standards that are currently emerging have a number of characteristics that can also be found in database management systems, like schemas (DTDs and XML schema) and query languages (XPath and XQuery). Following this line of reasoning, an XML database might resemble traditional database

  2. Assessing XML Data Management with XMark

    OpenAIRE

    Schmidt, A.R.; Waas, F.; Kersten, Martin; Carey, M.J.; Manolescu, I.; Busse, R.

    2002-01-01

    We discuss some of the experiences we gathered during the development and deployment of XMark, a tool to assess the infrastructure and performance of XML Data Management Systems. Since the appearance of the first XML database prototypes in research institutions and development labs, topics like validation, performance evaluation and optimization of XML query processors have received significant interest. The XMark benchmark follows a tradition in database research and provides a f...

  3. Static Analysis of XML Transformations in Java

    DEFF Research Database (Denmark)

    Kirkegaard, Christian; Møller, Anders; Schwartzbach, Michael I.

    2004-01-01

    ... of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms from one class to another as intended. We introduce Xact, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values ...

  4. "The Wonder Years" of XML.

    Science.gov (United States)

    Gazan, Rich

    2000-01-01

    Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…

  5. XML specifications DanRIS

    DEFF Research Database (Denmark)

    2009-01-01

    XML specifications for DanRIS (Danish Registration- og InformationsSystem), where the aim is: improved exchange of data, improved data processing, and ensuring future access to all data gathered from the year 1999 until now.

  6. The OLAP-XML Federation System

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    We present the logical “OLAP-XML Federation System” that enables the external data available in XML format to be used as virtual dimensions. Unlike the complex and time-consuming physical integration of OLAP and external data in current OLAP systems, our system makes OLAP queries referencing fast...

  7. δ-dependency for privacy-preserving XML data publishing.

    Science.gov (United States)

    Landberg, Anders H; Nguyen, Kinh; Pardede, Eric; Rahayu, J Wenny

    2014-08-01

    An ever increasing amount of medical data such as electronic health records, is being collected, stored, shared and managed in large online health information systems and electronic medical record systems (EMR) (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1-3]. From such rich collections, data is often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need for protecting individual people privacy, and it becomes an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise method is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of useful semantic information previously carried in the private XML data. Health data often has very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g. HL7 version 3(1)) and publishing health information. Lack of means to deal directly with data in XML format is inevitably a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and that takes into consideration the hierarchical nature of sensitive data (as opposed to "quasi-identifiers"). Lastly, we provide an implementation of our model, algorithm and privacy property, and perform an experimental analysis

  8. Algebra-Based Optimization of XML-Extended OLAP Queries

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    In today's OLAP systems, integrating fast changing data, e.g., stock quotes, physically into a cube is complex and time-consuming. The widespread use of XML makes it very possible that this data is available in XML format on the WWW; thus, making XML data logically federated with OLAP systems is desirable. This report presents a complete foundation for such OLAP-XML federations. This includes a prototypical query engine, a simplified query semantics based on previous work, and a complete physical algebra which enables precise modeling of the execution tasks of an OLAP-XML query. Effective algebra ...

  9. How Does XML Help Libraries?

    Science.gov (United States)

    Banerjee, Kyle

    2002-01-01

    Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…

  10. Principles of reusability of XML-based enterprise documents

    Directory of Open Access Journals (Sweden)

    Roman Malo

    2010-01-01

    XML (Extensible Markup Language) represents one of the flexible platforms for processing enterprise documents. Its simple syntax and powerful software infrastructure for processing this type of document are a guarantee of high interoperability of individual documents. XML is today one of the technologies influencing all aspects of the ICT area. In the paper, questions and basic principles of reusing XML-based documents are described in the field of enterprise documents. If we use XML databases or XML data types for storing these types of documents, then partial redundancy could be expected due to possible similarity of the documents. This similarity can be found especially in the documents' structure and also in their content, and its elimination is a necessary part of data optimization. The main idea of the paper is focused on how to divide complex XML documents into independent fragments that can be used as standalone documents and how to process them. The conclusions could be applied within software tools working with XML-based structured data and documents, such as document management systems or content management systems.

  11. An XML-hierarchical data structure for ENSDF

    International Nuclear Information System (INIS)

    Hurst, Aaron M.

    2016-01-01

    A data structure based on an eXtensible Markup Language (XML) hierarchy according to experimental nuclear structure data in the Evaluated Nuclear Structure Data File (ENSDF) is presented. A Python-coded translator has been developed to interpret the standard one-card records of the ENSDF datasets, together with their associated quantities defined according to field position, and generate corresponding representative XML output. The quantities belonging to this mixed-record format are described in the ENSDF manual. Of the 16 ENSDF records in total, XML output has been successfully generated for 15 records. An XML-translation for the Comment Record is yet to be implemented; this will be considered in a separate phase of the overall translation effort. Continuation records, not yet implemented, will also be treated in a future phase of this work. Several examples are presented in this document to illustrate the XML schema and methods for handling the various ENSDF data types. However, the proposed nomenclature for the XML elements and attributes need not necessarily be considered as a fixed set of constructs. Indeed, better conventions may be suggested and a consensus can be achieved amongst the various groups of people interested in this project. The main purpose here is to present an initial phase of the translation effort to demonstrate the feasibility of interpreting ENSDF datasets and creating a representative XML-structured hierarchy for data storage.
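
    An illustrative sketch of the general translation step, turning a fixed-field card-style record into XML elements; the column positions and the card text below are invented for the example and are not the actual ENSDF record layout handled by the translator.

    ```python
    import xml.etree.ElementTree as ET

    # Invented fixed-field card and column positions; not the real ENSDF layout.
    CARD = "152SM   L   121.7817    5/2+"

    level = ET.Element("level", nuclide=CARD[0:5].strip(), recordType=CARD[8])
    ET.SubElement(level, "energy").text = CARD[12:20].strip()
    ET.SubElement(level, "spinParity").text = CARD[24:28].strip()

    print(ET.tostring(level, encoding="unicode"))
    ```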

  12. XML Schema Languages: Beyond DTD.

    Science.gov (United States)

    Ioannides, Demetrios

    2000-01-01

    Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)

  13. XML and E-Journals: The State of Play.

    Science.gov (United States)

    Wusteman, Judith

    2003-01-01

    Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…

  14. A Survey in Indexing and Searching XML Documents.

    Science.gov (United States)

    Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James

    2002-01-01

    Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…

  15. XML Syntax for Clinical Laboratory Procedure Manuals

    OpenAIRE

    Saadawi, Gilan; Harrison, James H.

    2003-01-01

    We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large b...

  16. How will XML impact industrial automation?

    CERN Multimedia

    Pinceti, P

    2002-01-01

    A working group of the World Wide Web Consortium (W3C) has overcome the limits of both HTML and SGML with the definition of the extensible markup language - XML. This article looks at how XML will affect industrial automation (2 pages).

  17. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  18. System architecture with XML

    CERN Document Server

    Daum, Berthold

    2002-01-01

    XML is bringing together some fairly disparate groups into a new cultural clash: document developers trying to understand what a transaction is, database analysts getting upset because the relational model doesn't fit anymore, and web designers having to deal with schemata and rule based transformations. The key to rising above the confusion is to understand the different semantic structures that lie beneath the standards of XML, and how to model the semantics to achieve the goals of the organization. A pure architecture of XML doesn't exist yet, and it may never exist as the underlying technologies are so diverse. Still, the key to understanding how to build the new web infrastructure for electronic business lies in understanding the landscape of these new standards. If your background is in document processing, this book will show how you can use conceptual modeling to model business scenarios consisting of business objects, relationships, processes, and transactions in a document-centric way. Database des...

  19. Shuttle-Data-Tape XML Translator

    Science.gov (United States)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
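
    A small sketch of the XPath-style querying mentioned above, using Python's standard library; the element and attribute names are invented, not the actual SDT or JSDTImport structures.

    ```python
    import xml.etree.ElementTree as ET

    # Invented telemetry structure; not the actual SDT/JSDTImport layout.
    db = ET.fromstring("""
    <telemetry>
      <measurement id="V76X0123" subsystem="EPS" units="V"/>
      <measurement id="P50T0042" subsystem="RCS" units="degF"/>
    </telemetry>""")

    # ElementTree supports a useful subset of XPath, including attribute filters.
    for m in db.findall(".//measurement[@subsystem='EPS']"):
        print(m.get("id"), m.get("units"))
    ```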

  20. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  1. Analysis of XML-RPC Technology and Its Application

    Institute of Scientific and Technical Information of China (English)

    姚鹤岭

    2005-01-01

    To illustrate the application value of XML-RPC technology in specific scenarios, the concepts and features of the XML-based XML-RPC distributed technology are introduced. In writing a Meerkat client program, the Python language was used to implement functionality similar to the ArcWeb service. The research shows that, under certain conditions, XML-RPC technology can satisfy the communication and interoperability requirements between different applications very well.

  2. Towards an Ontology for the Global Geodynamics Project: Automated Extraction of Resource Descriptions from an XML-Based Data Model

    Science.gov (United States)

    Lumb, L. I.; Aldridge, K. D.

    2005-12-01

    Using the Earth Science Markup Language (ESML), an XML-based data model for the Global Geodynamics Project (GGP) was recently introduced [Lumb & Aldridge, Proc. HPCS 2005, Kotsireas & Stacey, eds., IEEE, 2005, 216-222]. This data model possesses several key attributes -i.e., it: makes use of XML schema; supports semi-structured ASCII format files; includes Earth Science affinities; and is on track for compliance with emerging Grid computing standards (e.g., the Global Grid Forum's Data Format Description Language, DFDL). Favorable attributes notwithstanding, metadata (i.e., data about data) was identified [Lumb & Aldridge, 2005] as a key challenge for progress in enabling the GGP for Grid computing. Even in projects of small-to-medium scale like the GGP, the manual introduction of metadata has the potential to be the rate-determining metric for progress. Fortunately, an automated approach for metadata introduction has recently emerged. Based on Gleaning Resource Descriptions from Dialects of Languages (GRDDL, http://www.w3.org/2004/01/rdxh/spec), this bottom-up approach allows for the extraction of Resource Description Framework (RDF) representations from the XML-based data model (i.e., the ESML representation of GGP data) subject to rules of transformation articulated via eXtensible Stylesheet Language Transformations (XSLT). In addition to introducing relationships into the GGP data model, and thereby addressing the metadata requirement, the syntax and semantics of RDF comprise a requisite for a GGP ontology - i.e., "the common words and concepts (the meaning) used to describe and represent an area of knowledge" [Daconta et al., The Semantic Web, Wiley, 2003]. After briefly reviewing the XML-based model for the GGP, attention focuses on the automated extraction of an RDF representation via GRDDL with XSLT-delineated templates. This bottom-up approach, in tandem with a top-down approach based on the Protege integrated development environment for ontologies (http
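
    The approach above hinges on applying an XSLT stylesheet to an XML (ESML) document to extract an RDF representation. Below is a minimal, generic sketch of that transformation step using the standard Java XSLT API; the file names are placeholders and the actual GRDDL/ESML stylesheets are not reproduced here.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class GrddlStyleExtraction {
    public static void main(String[] args) throws Exception {
        // Placeholder file names: an ESML-style source document and an
        // XSLT stylesheet that emits RDF/XML for it.
        StreamSource xmlInput = new StreamSource("ggp-data.esml.xml");
        StreamSource xslt = new StreamSource("esml-to-rdf.xsl");

        // Compile the stylesheet and apply it, writing the RDF result to a file.
        Transformer transformer =
                TransformerFactory.newInstance().newTransformer(xslt);
        transformer.transform(xmlInput, new StreamResult("ggp-data.rdf"));
    }
}
```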

  3. Sample Scripts for Generating PaGE-OM XML

    Lifescience Database Archive (English)

    Full Text Available Sample Scripts for Generating PaGE-OM XML This page is offering some sample scripts...on MySQL. Outline chart of procedure 6. Creating RDB tables for Generating PaGE-OM XML These scripts help yo...wnload: create_tables_sql2.zip 7. Generating PaGE-OM XML from phenotype data This sample Perl script helps y

  4. XML and Security I

    Czech Academy of Sciences Publication Activity Database

    Brechlerová, Dagmar

    2007-01-01

    Vol. 9, No. 1 (2007), p. 13-25 ISSN 1801-2140 R&D Projects: GA AV ČR 1ET200300413 Institutional research plan: CEZ:AV0Z10300504 Keywords: XML security * XML digital signature * XKMS Subject RIV: IN - Informatics, Computer Science http://crypto-world.info/index2.php

  5. The curse of namespaces in the domain of XML signature

    DEFF Research Database (Denmark)

    Jensen, Meiko; Liao, Lijun; Schwenk, Jörg

    2009-01-01

    The XML signature wrapping attack is one of the most discussed security issues of the Web Services security community during the last years. Until now, the issue has not been solved, and all countermeasure approaches proposed so far were shown to be insufficient. In this paper, we present yet another way to perform signature wrapping attacks by using the XML namespace injection technique. We show that the interplay of XML Signature, XPath, and the XML namespace concept has severe flaws that can be exploited for an attack, and that XML namespaces in general pose real trouble to digital signatures in the XML domain. Additionally, we present and discuss some new approaches in countering the proposed attack vector.

  6. Representing User Navigation in XML Retrieval with Structural Summaries

    DEFF Research Database (Denmark)

    Ali, M. S.; Consens, Mariano P.; Larsen, Birger

    This poster presents a novel way to represent user navigation in XML retrieval using collection statistics from XML summaries. Currently, developing user navigation models in XML retrieval is costly and the models are specific to collected user assessments. We address this problem by proposing...

  7. Static Analysis for Event-Based XML Processing

    DEFF Research Database (Denmark)

    Møller, Anders

    2008-01-01

    Event-based processing of XML data - as exemplified by the popular SAX framework - is a powerful alternative to using W3C's DOM or similar tree-based APIs. The event-based approach processes documents in a streaming fashion with minimal memory consumption. This paper discusses challenges for creating program analyses for SAX applications. In particular, we consider the problem of statically guaranteeing that a given SAX program always produces only well-formed and valid XML output. We propose an analysis technique based on existing analyses of Servlets, string operations, and XML graphs.
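
    As a reminder of what the event-based (SAX) style discussed above looks like in practice, here is a small, self-contained handler that streams through a document and counts elements without building a tree. It is generic example code with an assumed input file name, not part of the analysis described in the record.

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ElementCounter extends DefaultHandler {
    private int count = 0;

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes) {
        // Called once per start tag as the parser streams through the input;
        // no in-memory tree is ever built.
        count++;
    }

    @Override
    public void endDocument() {
        System.out.println("Elements seen: " + count);
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse("input.xml", new ElementCounter());  // assumed input file
    }
}
```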

  8. Building adaptable and reusable XML applications with model transformations

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas

    2005-01-01

    We present an approach in which the semantics of an XML language is defined by means of a transformation from an XML document model (an XML schema) to an application specific model. The application specific model implements the intended behavior of documents written in the language. A transformation

  9. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    Full Text Available In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research works, in which we presented the functional dependencies and normal forms of XML Schema, we present the decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  10. The Big Bang - XML expanding the information universe

    International Nuclear Information System (INIS)

    Rutt, S.; Chamberlain, M.; Buckley, G.

    2004-01-01

    The XML language is discussed as a tool in information management. Industries are adopting XML as a means of making disparate systems talk to each other or as a means of exchanging information between different organisations and different operating systems by using a common set of mark-up. More important to this discussion is the ability to use XML within the field of Technical Documentation and Publication. The capabilities of XML for working with different types of documents are presented. In conclusion, a summary is given of the benefits of using an XML solution: Precisely match your requirements at no more initial cost; Single Source Dynamic Content Delivery and Management; 100% of authors' time is spent creating content; Content is no longer locked into its format; Reduced hardware and data storage requirements; Content survives the publishing lifecycle; Auto-versioning/release management control; Workflows can be mapped and electronic audit trails made

  11. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
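
    One plausible way to realize the MySQL-to-XML mapping mentioned above is to stream a JDBC result set out as XML. The sketch below uses StAX with an invented table and element layout; the actual ADASS schema, connection string and credentials are assumptions, and the MySQL JDBC driver must be on the classpath.

```java
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class PapersToXml {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and table; adjust for a real database.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost/adass", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT title, author FROM papers");
             FileWriter out = new FileWriter("papers.xml")) {

            XMLStreamWriter xml =
                    XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            xml.writeStartDocument();
            xml.writeStartElement("papers");
            while (rs.next()) {
                // One <paper> element per relational row.
                xml.writeStartElement("paper");
                xml.writeStartElement("title");
                xml.writeCharacters(rs.getString("title"));
                xml.writeEndElement();
                xml.writeStartElement("author");
                xml.writeCharacters(rs.getString("author"));
                xml.writeEndElement();
                xml.writeEndElement();
            }
            xml.writeEndElement();
            xml.writeEndDocument();
            xml.close();
        }
    }
}
```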

  12. TIJAH: Embracing IR Methods in XML Databases

    NARCIS (Netherlands)

    List, Johan; Mihajlovic, V.; Ramirez, Georgina; de Vries, A.P.; Hiemstra, Djoerd; Blok, H.E.

    2005-01-01

    This paper discusses our participation in INEX (the Initiative for the Evaluation of XML Retrieval) using the TIJAH XML-IR system. TIJAH's system design follows a `standard' layered database architecture, carefully separating the conceptual, logical and physical levels. At the conceptual level, we

  13. A comparison of database systems for XML-type data

    NARCIS (Netherlands)

    Risse, J.E.; Leunissen, J.A.M.

    2010-01-01

    Background: In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability

  14. MHD intermediate shock discontinuities: Pt. 1

    International Nuclear Information System (INIS)

    Kennel, C.F.; Blandford, R.D.; Coppi, P.

    1989-01-01

    Recent numerical investigations have focused attention once more on the role of intermediate shocks in MHD. Four types of intermediate shock are identified using a graphical representation of the MHD Rankine-Hugoniot conditions. This same representation can be used to exhibit the close relationship of intermediate shocks to switch-on shocks and rotational discontinuities. The conditions under which intermediate discontinuities can be found are elucidated. The variations in velocity, pressure, entropy and magnetic-field jumps with upstream parameters in intermediate shocks are exhibited graphically. The evolutionary arguments traditionally advanced against intermediate shocks may fail because the equations of classical MHD are not strictly hyperbolic. (author)

  15. An XML-based framework for personalized health management.

    Science.gov (United States)

    Lee, Hiye-Ja; Park, Seung-Hun; Jeong, Byeong-Soo

    2006-01-01

    This paper proposes a framework for personalized health management. In this framework, XML technology is used for representing and managing the health information and knowledge. Major components of the framework are Health Management Prescription (HMP) Expert System and Health Information Repository. The HMP Expert System generates a HMP efficiently by using XML-based templates. Health Information Repository provides integrated health information and knowledge for personalized health management by using XML and relational database together.

  16. Exploring PSI-MI XML Collections Using DescribeX

    Directory of Open Access Journals (Sweden)

    Samavi Reza

    2007-12-01

    Full Text Available PSI-MI has been endorsed by the protein informatics community as a standard XML data exchange format for protein-protein interaction datasets. While many public databases support the standard, there is a degree of heterogeneity in the way the proposed XML schema is interpreted and instantiated by different data providers. Analysis of schema instantiation in large collections of XML data is a challenging task that is unsupported by existing tools.

  17. An Introduction to XML and Web Technologies

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    , building on top of the early foundations. This book offers a comprehensive introduction to the area. There are two main threads of development, corresponding to the two parts of this book. XML technologies generalize the notion of data on the Web from hypertext documents to arbitrary data, including those that have traditionally been the realm of databases. In this book we cover the basic XML technology and the supporting technologies of XPath, DTD, XML Schema, DSD2, RELAX NG, XSLT, XQuery, DOM, JDOM, JAXB, SAX, STX, XDuce, and XACT. Web technologies build on top of the HTTP protocol to provide richer...

  18. An XML-Enabled Data Mining Query Language XML-DMQL

    NARCIS (Netherlands)

    Feng, L.; Dillon, T.

    2005-01-01

    Inspired by the good work of Han et al. (1996) and Elfeky et al. (2001) on the design of data mining query languages for relational and object-oriented databases, in this paper, we develop an expressive XML-enabled data mining query language by extension of XQuery. We first describe some

  19. XML-based DICOM data format.

    Science.gov (United States)

    Yu, Cong; Yao, Zhihong

    2010-04-01

    To enhance the readability, improve the structure, and facilitate the sharing of digital imaging and communications in medicine (DICOM) files, this research proposed an XML-based DICOM data format. Because XML Schema offers great flexibility for expressing constraints on the content model of elements, we used it to describe the new format, thus making it consistent with the one originally defined by DICOM. Meanwhile, such schemas can be used in the creation and validation of the XML-encoded DICOM files, acting as a standard for data transmission and sharing on the Web. Upon defining the new data format, we started with representing a single data element and further improved the whole data structure with the method of modularization. In contrast to the original format, the new one possesses better structure without loss of related information. In addition, we demonstrated the application of XSLT and XQuery. All of the advantages mentioned above resulted from this new data format.
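
    Central to the approach above is validating XML-encoded DICOM files against an XML Schema. A generic validation sketch with the standard javax.xml.validation API is shown below; the schema and instance file names are placeholders, not the actual schema proposed by the authors.

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class DicomXmlValidation {
    public static void main(String[] args) throws Exception {
        // Load a (hypothetical) schema describing the XML-encoded DICOM format.
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("dicom-xml.xsd"));

        // Validate an XML-encoded DICOM instance; an exception is thrown
        // if the document violates the schema.
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("study.dcm.xml")));
        System.out.println("Document is valid against the schema.");
    }
}
```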

  20. Analysis of Data Communication with XML and JSON in Web Services

    Directory of Open Access Journals (Sweden)

    Sudirman M.Kom

    2016-08-01

    Full Text Available Abstract - The size of the data in the data communication process using a web service over a network greatly affects the transfer speed. XML and JSON are the data formats used for data communication in web services. JSON produces a smaller data size than the XML format. Keywords - data communication, web service, XML, JSON.

  1. Flight Dynamic Model Exchange using XML

    Science.gov (United States)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.
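
    To make the idea of XML-encoded function tables above concrete, the sketch below parses a small, invented table format (independent breakpoints plus dependent values) with DOM. It is not the AIAA standard itself; the element names, file name and data are assumptions made only for illustration.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class FunctionTableReader {
    public static void main(String[] args) throws Exception {
        // Hypothetical layout:
        // <functionTable name="CL_vs_alpha">
        //   <independent>0 2 4 6</independent>
        //   <dependent>0.0 0.2 0.4 0.6</dependent>
        // </functionTable>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("aero-model.xml");
        Element table = (Element) doc.getElementsByTagName("functionTable").item(0);

        double[] alpha = parse(table, "independent");
        double[] cl = parse(table, "dependent");
        for (int i = 0; i < alpha.length; i++) {
            System.out.printf("alpha=%.1f  CL=%.2f%n", alpha[i], cl[i]);
        }
    }

    // Split a whitespace-separated list of numbers into an array.
    private static double[] parse(Element table, String tag) {
        String[] tokens = table.getElementsByTagName(tag).item(0)
                .getTextContent().trim().split("\\s+");
        double[] values = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            values[i] = Double.parseDouble(tokens[i]);
        }
        return values;
    }
}
```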

  2. TX-Kw: An Effective Temporal XML Keyword Search

    OpenAIRE

    Rasha Bin-Thalab; Neamat El-Tazi; Mohamed E.El-Sharkawi

    2013-01-01

    Inspired by the great success of information retrieval (IR) style keyword search on the web, keyword search on XML has emerged recently. Existing methods cannot resolve challenges addressed by using keyword search in Temporal XML documents. We propose a way to evaluate temporal keyword search queries over Temporal XML documents. Moreover, we propose a new ranking method based on the time-aware IR ranking methods to rank temporal keyword search queries results. Extensive experiments have been ...

  3. XML representation and management of temporal information for web-based cultural heritage applications

    Directory of Open Access Journals (Sweden)

    Fabio Grandi

    2006-01-01

    Full Text Available In this paper we survey the recent activities and achievements of our research group in the deployment of XML-related technologies in Cultural Heritage applications concerning the encoding of temporal semantics in Web documents. In particular we will review "The Valid Web", which is an XML/XSL infrastructure we defined and implemented for the definition and management of historical information within multimedia documents available on the Web, and its further extension to the effective encoding of advanced temporal features like indeterminacy, multiple granularities and calendars, enabling an efficient processing in a user-friendly Web-based environment. Potential uses of the developed infrastructures include a broad range of applications in the cultural heritage domain, where the historical perspective is relevant, with potentially positive impacts on E-Education and E-Science.

  4. XAL: An algebra for XML query optimization

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Pau, C.D.; Zhou, Xiaofang

    2002-01-01

    This paper proposes XAL, an XML ALgebra. Its novelty is based on the simplicity of its data model and its well-defined logical operators, which makes it suitable for composability, optimizability, and semantics definition of a query language for XML data. At the heart of the algebra resides the

  5. Generalized intermediate long-wave hierarchy in zero-curvature representation with noncommutative spectral parameter

    Science.gov (United States)

    Degasperis, A.; Lebedev, D.; Olshanetsky, M.; Pakuliak, S.; Perelomov, A.; Santini, P. M.

    1992-11-01

    The simplest generalization of the intermediate long-wave hierarchy (ILW) is considered to show how to extend the Zakharov-Shabat dressing method to nonlocal, i.e., integro-partial differential, equations. The purpose is to give a procedure of constructing the zero-curvature representation of this class of equations. This result is obtained by combining the Drinfeld-Sokolov formalism together with the introduction of an operator-valued spectral parameter, namely, a spectral parameter that does not commute with the space variable x. This extension provides a connection between the ILWk hierarchy and the Saveliev-Vershik continuum graded Lie algebras. In the case of ILW2 the Fairlie-Zachos sinh-algebra was found.

  6. The XML language family from the perspective of database management systems

    OpenAIRE

    Imeläinen, Jani

    2006-01-01

    The thesis examines the specifications of the XML language family from the perspective of database management systems. The XML specifications are compared with the basic concepts of database management systems, and the most essential XML specifications are presented within this scope. The main goal is to determine the significance and role of the XML language family specifications in the processing of XML documents in database management systems. The central result of the thesis is a framework that illustrates the database manag...

  7. Integrating XML Data in the TARGIT OLAP System

    DEFF Research Database (Denmark)

    Pedersen, Dennis; Pedersen, Jesper; Pedersen, Torben Bach

    2004-01-01

    This paper presents work on logical integration of OLAP and XML data sources, carried out in cooperation between TARGIT, a Danish OLAP client vendor, and Aalborg University. A prototype has been developed that allows XML data on the WWW to be used as dimensions and measures in the OLAP system...... the ability to use XML data as measures, as well as a novel multigranular data model and query language that formalizes and extends the TARGIT data model and query language....

  8. Embedded XML DOM Parser: An Approach for XML Data Processing on Networked Embedded Systems with Real-Time Requirements

    Directory of Open Access Journals (Sweden)

    Cavia Soto MAngeles

    2008-01-01

    Full Text Available Abstract Trends in control and automation show an increase in data processing and communication in embedded automation controllers. The eXtensible Markup Language (XML is emerging as a dominant data syntax, fostering interoperability, yet little is still known about how to provide predictable real-time performance in XML processing, as required in the domain of industrial automation. This paper presents an XML processor that is designed with such real-time performance in mind. The publication attempts to disclose insight gained in applying techniques such as object pooling and reuse, and other methods targeted at avoiding dynamic memory allocation and its consequent memory fragmentation. Benchmarking tests are reported in order to illustrate the benefits of the approach.
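
    The key technique named in the abstract, object pooling and reuse to avoid dynamic memory allocation during parsing, can be sketched generically as below. This is not the authors' embedded parser; it only illustrates recycling node objects instead of allocating new ones, and the class and field names are invented. A parser built on such a pool would acquire() a Node per start tag and release() it once the corresponding data has been consumed, keeping memory usage flat.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** A tiny fixed-size pool that hands out reusable node objects. */
public class NodePool {

    /** Minimal reusable record for an element event. */
    public static final class Node {
        String name;
        String text;
        void reset() { name = null; text = null; }
    }

    private final Deque<Node> free = new ArrayDeque<>();

    public NodePool(int capacity) {
        // Pre-allocate every node up front so the parse loop never allocates.
        for (int i = 0; i < capacity; i++) {
            free.push(new Node());
        }
    }

    /** Borrow a node; a real-time system would handle exhaustion explicitly. */
    public Node acquire() {
        if (free.isEmpty()) {
            throw new IllegalStateException("pool exhausted");
        }
        return free.pop();
    }

    /** Return a node to the pool after clearing its fields for reuse. */
    public void release(Node node) {
        node.reset();
        free.push(node);
    }
}
```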

  9. XML — an opportunity for data standards in the geosciences

    Science.gov (United States)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.

  10. Get It Together: Integrating Data with XML.

    Science.gov (United States)

    Miller, Ron

    2003-01-01

    Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…

  11. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.

  12. Domain XML semantic integration based on extraction rules and ontology mapping

    Directory of Open Access Journals (Sweden)

    Huayu LI

    2016-08-01

    Full Text Available A plenty of XML documents exist in the petroleum engineering field, but the traditional XML integration solution can’t provide semantic query, which leads to low data use efficiency. In light of the WeXML (oil & gas well XML) data semantic integration and query requirements, this paper proposes a semantic integration method based on extraction rules and ontology mapping. The method firstly defines a series of extraction rules with which elements and properties of the WeXML Schema are mapped to classes and properties in the WeOWL ontology, respectively; secondly, an algorithm is used to transform WeXML documents into WeOWL instances. Because WeOWL provides limited semantics, ontology mappings between two ontologies are then built to explain classes and properties of the global ontology with terms of WeOWL, and semantic query based on the global domain concepts model is provided. By constructing a WeXML data semantic integration prototype system, the proposed transformational rule, the transfer algorithm and the mapping rule are tested.

  13. Modeling the Arden Syntax for medical decisions in XML.

    Science.gov (United States)

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of MLMs. Two hundred sixty five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in XML version. The model uses two syntax checking mechanisms, first an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transformation ArdenML into other languages.

  14. Jewels in XML

    OpenAIRE

    Habekost, Engelbert

    2005-01-01

    In the research department of Humboldt-Universität, the publication series »Öffentliche Vorlesungen« has been produced with the FrameMaker software since 2002. This involved converting the production process to XML-based document creation as well as handling the complete prepress stage in-house.

  15. Java facilities in processing XML files - JAXB and generating PDF reports

    Directory of Open Access Journals (Sweden)

    Danut-Octavian SIMION

    2008-01-01

    Full Text Available The paper presents the Java programming language facilities for working with XML files using JAXB (the Java Architecture for XML Binding) technology and for generating PDF reports from XML files using Java objects. The XML file can be an existing one and could contain the data about an entity (Clients, for example) or it might be the result of a SELECT-SQL statement. JAXB generates Java classes through XML Schema (xs) rules and a marshalling/unmarshalling compiler. The PDF file is built from an XML file and uses an XSL-FO formatting file and a Java ResultSet object.
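
    As a concrete, minimal illustration of the JAXB round trip described above (with an invented Client entity, not the paper's actual classes), the following sketch marshals an annotated object to XML; unmarshalling is the symmetric context.createUnmarshaller() call. On newer JDKs the JAXB API must be added as a dependency, since it was removed from the standard library after Java 8.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbClientExample {

    // Hypothetical entity used only for this example.
    @XmlRootElement(name = "client")
    public static class Client {
        @XmlElement public String name;
        @XmlElement public String city;
    }

    public static void main(String[] args) throws Exception {
        Client client = new Client();
        client.name = "Example Ltd.";
        client.city = "Bucharest";

        // Marshal the annotated object to XML on standard output.
        JAXBContext context = JAXBContext.newInstance(Client.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(client, System.out);
    }
}
```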

  16. XML Translator for Interface Descriptions

    Science.gov (United States)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
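
    A toy version of the translation step described above, reading register definitions from a made-up XML interface description and emitting C #define lines, could look like the sketch below. The real tool also produces Verilog and VHDL headers and validates input against its own schema; the element and attribute names here are assumptions.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class InterfaceHeaderGenerator {
    public static void main(String[] args) throws Exception {
        // Hypothetical input:
        // <interface><register name="CONTROL" offset="0x00"/>
        //             <register name="STATUS" offset="0x04"/></interface>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("fpga-interface.xml");

        NodeList registers = doc.getElementsByTagName("register");
        StringBuilder header = new StringBuilder("/* generated - do not edit */\n");
        for (int i = 0; i < registers.getLength(); i++) {
            Element reg = (Element) registers.item(i);
            header.append("#define REG_")
                  .append(reg.getAttribute("name"))
                  .append(" ")
                  .append(reg.getAttribute("offset"))
                  .append("\n");
        }
        System.out.print(header);   // a real generator would write a .h file
    }
}
```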

  17. XML: How It Will Be Applied to Digital Library Systems.

    Science.gov (United States)

    Kim, Hyun-Hee; Choi, Chang-Seok

    2000-01-01

    Shows how XML is applied to digital library systems. Compares major features of XML with those of HTML and describes an experimental XML-based metadata retrieval system, which is based on the Dublin Core and is designed as a subsystem of the Korean Virtual Library and Information System (VINIS). (Author/LRW)

  18. DICOM supported software configuration by XML files

    International Nuclear Information System (INIS)

    LucenaG, Bioing Fabian M; Valdez D, Andres E; Gomez, Maria E; Nasisi, Oscar H

    2007-01-01

    A method for the configuration of informatics systems that provide support for DICOM standards using XML files is proposed. The difference from other proposals is that this system does not encode the information of a DICOM object file, but encodes the standard itself in an XML file. The development itself is the format of the XML files mentioned, so that they can support what DICOM specifies for multiple languages. In this way, the same configuration file (or files) can be used in different systems. Together with the generated XML configuration file, we also wrote a set of CSS and XSL files, so the same file can be visualized in a standard browser as a query system for the DICOM standard, an emerging use that was not a main objective but brings great utility and versatility. We also present some usage examples of the configuration file, mainly in relation to the loading of DICOM information objects. Finally, in the conclusions we show the utility that the system has already provided when the edition of the DICOM standard changed from 2006 to 2007

  19. An Extended Role Based Access Control Method for XML Documents

    Institute of Scientific and Technical Information of China (English)

    MENG Xiao-feng; LUO Dao-feng; OU Jian-bo

    2004-01-01

    As XML has become increasingly important as the data-exchange format of the Internet and intranets, access control on XML properties arises as a new issue. Role-based access control (RBAC) is an access control method that has been widely used in the Internet, operating systems and relational databases over the last 10 years. Though RBAC is already relatively mature in the above fields, new problems occur when it is applied to XML properties. This paper proposes an integrated model to resolve these problems, after a full analysis of the features of XML and RBAC.

  20. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    Full Text Available The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general purpose compression tools have been proposed without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the proposed technique outperforms both general purpose and XML-specific compressors.
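
    For readers unfamiliar with the underlying transform, a naive textbook Burrows-Wheeler transform is sketched below. The paper's modified BWT and its XML-specific handling are not reproduced; this quadratic version, with an assumed end-of-string sentinel, is for illustration only.

```java
import java.util.Arrays;

public class NaiveBwt {

    /** Returns the last column of the sorted rotation matrix of s. */
    static String transform(String s) {
        int n = s.length();
        String[] rotations = new String[n];
        for (int i = 0; i < n; i++) {
            // Rotation of the input starting at position i.
            rotations[i] = s.substring(i) + s.substring(0, i);
        }
        Arrays.sort(rotations);
        StringBuilder last = new StringBuilder(n);
        for (String rotation : rotations) {
            last.append(rotation.charAt(n - 1));
        }
        return last.toString();
    }

    public static void main(String[] args) {
        // The sentinel '\u0000' marks the end so the transform is invertible;
        // it is assumed not to occur in the input text.
        String input = "<a><b>x</b></a>" + '\u0000';
        System.out.println(transform(input));
    }
}
```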

  1. CSchema: A Downgrading Policy Language for XML Access Control

    Institute of Scientific and Technical Information of China (English)

    Dong-Xi Liu

    2007-01-01

    The problem of regulating access to XML documents has attracted much attention from both academic and industry communities. In existing approaches, the XML elements specified by access policies are either accessible or inaccessible according to their sensitivity. However, in some cases, the original XML elements are sensitive and inaccessible, but after being processed in some appropriate ways, the results become insensitive and thus accessible. This paper proposes a policy language to accommodate such cases, which can express the downgrading operations on sensitive data in XML documents through explicit calculations on them. The proposed policy language is called calculation-embedded schema (CSchema), which extends the ordinary schema languages with protection type for protecting sensitive data and specifying downgrading operations. CSchema language has a type system to guarantee the type correctness of the embedded calculation expressions and moreover this type system also generates a security view after type checking a CSchema policy. Access policies specified by CSchema are enforced by a validation procedure, which produces the released documents containing only the accessible data by validating the protected documents against CSchema policies. These released documents are then ready to be accessed by, for instance, XML query engines. By incorporating this validation procedure, other XML processing technologies can use CSchema as the access control module.

  2. An introduction to XML query processing and keyword search

    CERN Document Server

    Lu, Jiaheng

    2013-01-01

    This book systematically and comprehensively covers the latest advances in XML data searching. It presents an extensive overview of the current query processing and keyword search techniques on XML data.

  3. Intermediate Levels of Visual Processing

    National Research Council Canada - National Science Library

    Nakayama, Ken

    1998-01-01

    ...) surface representation, here we have shown that there is an intermediate level of visual processing, between the analysis of the image and higher order representations related to specific objects; (2...

  4. Application of XML in real-time data warehouse

    Science.gov (United States)

    Zhao, Yanhong; Wang, Beizhan; Liu, Lizhao; Ye, Su

    2009-07-01

    At present, XML is one of the most widely used technologies for describing and exchanging data, and the need for real-time data makes the real-time data warehouse a popular area of data warehouse research. What can we gain by applying XML technology to real-time data warehouse research? XML technology solves many technical problems that cannot be addressed in a traditional real-time data warehouse and realizes the integration of the OLAP (On-line Analytical Processing) and OLTP (On-line Transaction Processing) environments. Only then can a real-time data warehouse truly be called "real time".

  5. Realization Of Algebraic Processor For XML Documents Processing

    International Nuclear Information System (INIS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2010-01-01

    In this paper, some possibilities concerning the implementation of an algebraic method for XML hierarchical data processing, which makes the XML search mechanism faster, are presented. A different point of view is offered for the creation of an advanced algebraic processor (with all the necessary software tools and programming modules). This nontraditional approach to fast XML navigation with the presented algebraic processor may help to build an easier, user-friendly interface providing XML transformations, which can avoid the difficulties of the complicated language constructions of XSL, XSLT and XPath. This approach allows comparatively simple search of XML hierarchical data by means of the following types of functions: specification functions and so-called built-in functions. The choice of the Java programming language may appear strange at first, but it isn't when you consider that the applications can run on different kinds of computers. The specific search mechanism based on linear algebra theory is faster in comparison with MSXML parsers (by about 30% on the developed examples). Actually, there exists the possibility of creating new software tools based on linear algebra theory, which cover the whole navigation and search techniques characterizing XSLT/XPath. The proposed method is able to replace more complicated operations in other SOA components.

  6. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. Unfortunately, in stack-based environments, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  7. Constructing an XML database of linguistics data

    Directory of Open Access Journals (Sweden)

    J H Kroeze

    2010-04-01

    Full Text Available A language-oriented, multi-dimensional database of the linguistic characteristics of the Hebrew text of the Old Testament can enable researchers to do ad hoc queries. XML is a suitable technology to transform free text into a database. A clause’s word order can be kept intact while other features such as syntactic and semantic functions can be marked as elements or attributes. The elements or attributes from the XML “database” can be accessed and processed by a 4th generation programming language, such as Visual Basic. XML is explored as an option to build an exploitable database of linguistic data by representing inherently multi-dimensional data, including syntactic and semantic analyses of free text.

  8. A Runtime System for XML Transformations in Java

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Kirkegaard, Christian; Møller, Anders

    2004-01-01

    We show that it is possible to extend a general-purpose programming language with a convenient high-level data-type for manipulating XML documents while permitting (1) precise static analysis for guaranteeing validity of the constructed XML documents relative to the given DTD schemas, and (2...

  9. A Layered View Model for XML Repositories and XML Data Warehouses

    NARCIS (Netherlands)

    Rajugan, R.; Chang, E.; Dillon, T.; Feng, L.

    The Object-Oriented (OO) conceptual models have the power in describing and modeling real-world data semantics and their inter-relationships in a form that is precise and comprehensible to users. Conversely, XML is fast emerging as the dominant standard for storing, describing and interchanging data

  10. Generating XML data from relational databases

    OpenAIRE

    Migani, Silvina; Correa, Carlos; Vera, Cristina; Romera, Liliana

    2012-01-01

    The XML language, the languages that allow XML data to be manipulated, and their impact on the database world are the area in which this project is developed. It arises as an initiative of faculty members in the database area, with the aim of deepening the study of XML and experimenting with database engines that support it.

  11. Modeling business objects with XML schema

    CERN Document Server

    Daum, Berthold

    2003-01-01

    XML Schema is the new language standard from the W3C and the new foundation for defining data in Web-based systems. There is a wealth of information available about Schemas but very little understanding of how to use this highly formal specification for creating documents. Grasping the power of Schemas means going back to the basics of documents themselves, and the semantic rules, or grammars, that define them. Written for schema designers, system architects, programmers, and document authors, Modeling Business Objects with XML Schema guides you through understanding Schemas from the basic concepts, type systems, type derivation, inheritance, namespace handling, through advanced concepts in schema design. *Reviews basic XML syntax and the Schema recommendation in detail. *Builds a knowledge base model step by step (about jazz music) that is used throughout the book. *Discusses Schema design in large environments, best practice design patterns, and Schema's relation to object-oriented concepts.

  12. XML Based Scientific Data Management Facility

    Science.gov (United States)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java, and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  13. Interpreting XML documents via an RDF schema ontology

    NARCIS (Netherlands)

    Klein, Michel

    2002-01-01

    Many business documents are represented in XML. However XML only describes the structure of data, not its meaning. The meaning of data is required for advanced automated processing, as is envisaged in the "Semantic Web". Ontologies are often used to describe the meaning of data items. Many ontology

  14. ECG and XML: an instance of a possible XML schema for the ECG telemonitoring.

    Science.gov (United States)

    Di Giacomo, Paola; Ricci, Fabrizio L

    2005-03-01

    Management of many types of chronic diseases relies heavily on patients' self-monitoring of their disease conditions. In recent years, Internet-based home telemonitoring systems allowing transmission of patient data to a central database and offering immediate access to the data by the care providers have become available. The adoption of Extensible Mark-up Language (XML) as a W3C standard has generated considerable interest in the potential value of this language in health informatics. However, the telemonitoring systems often work with only one or a few types of medical devices. This is because different medical devices produce different types of data, and the existing telemonitoring systems are generally built around a proprietary data schema. In this paper, we describe a generic data schema for a telemonitoring system that is applicable to different types of medical devices and different diseases. We then present an architecture, based on an XML schema, for exchanging clinical information (telemonitoring data, signals and clinical reports) in the XML standard, keeping the information in each electronic patient record up to date and integrating it in real time with the information collected during telemonitoring activities, across all the structures involved in the patient's healthcare process.

  15. Algebra-Based Optimization of XML-Extended OLAP Queries

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    In today’s OLAP systems, integrating fast changing data physically into a cube is complex and time-consuming. Our solution, the “OLAP-XML Federation System,” makes it possible to reference the fast changing data in XML format in OLAP queries without physical integration. In this paper, we introduce...

  16. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    Science.gov (United States)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics. System analysts have no toolset for modeling and analyzing an XML system. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is consequently translated by the XSD-Translator. The output of the XSD-Translator is an XSD, which is our target and is called the object language.

  17. Achieving Adaptivity For OLAP-XML Federations

    DEFF Research Database (Denmark)

    Pedersen, D.; Pedersen, Torben Bach

    2003-01-01

    Motivated by the need for more flexible OLAP systems, this paper presents the results of work on logical integration of external data in OLAP databases, carried out in cooperation between the Danish OLAP client vendor TARGIT and Aalborg University. Flexibility is ensured by supporting XML...'s ability to adapt to changes in its surroundings. This paper describes the potential problems that may interrupt the operation of the integration system, in particular those caused by the often autonomous and unreliable nature of external XML data sources, and methods for handling these problems...

  18. Work orders management based on XML file in printing

    Directory of Open Access Journals (Sweden)

    Ran Peipei

    2018-01-01

    Full Text Available The Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express the information of work orders will improve efficiency for management and production. Accordingly, in this paper we introduce the technology of managing work orders and obtain an XML file through the Document Object Model (DOM) technology. When the information is needed to conduct production, the XML file is parsed and the information is saved in a database, which is beneficial for preserving and modifying the information.
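
    A minimal sketch of the DOM step described above, building a small work-order document and serializing it to a file, is shown below; the element names and values are invented and nothing is assumed about the paper's actual schema.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class WorkOrderWriter {
    public static void main(String[] args) throws Exception {
        // Build the document in memory with DOM.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element order = doc.createElement("workOrder");
        order.setAttribute("id", "2018-001");
        doc.appendChild(order);

        Element job = doc.createElement("job");
        job.setTextContent("print 500 brochures");
        order.appendChild(job);

        // Serialize the DOM tree to an XML file with an identity transform.
        TransformerFactory.newInstance().newTransformer().transform(
                new DOMSource(doc), new StreamResult(new File("work-order.xml")));
    }
}
```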

  19. XML as a standard I/O data format in scientific software development

    International Nuclear Information System (INIS)

    Song Tianming; Yang Jiamin; Yi Rongqing

    2010-01-01

    XML is an open standard data format with strict syntax rules, which is widely used in large-scale software development. It is adopted as I/O file format in the development of SpectroSim, a simulation and data-processing system for soft x-ray spectrometer used in ICF experiments. XML data that describe spectrometer configurations, schema codes that define syntax rules for XML and report generation technique for visualization of XML data are introduced. The characteristics of XML such as the capability to express structured information, self-descriptive feature, automation of visualization are explained with examples, and its feasibility as a standard scientific I/O data file format is discussed. (authors)

  20. XML Schema Guide for Primary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  1. Adaptive Hypermedia Educational System Based on XML Technologies.

    Science.gov (United States)

    Baek, Yeongtae; Wang, Changjong; Lee, Sehoon

    This paper proposes an adaptive hypermedia educational system using XML technologies, such as XML, XSL, XSLT, and XLink. Adaptive systems are capable of altering the presentation of the content of the hypermedia on the basis of a dynamic understanding of the individual user. The user profile can be collected in a user model, while the knowledge…

  2. Using XML to Separate Content from the Presentation Software in eLearning Applications

    Science.gov (United States)

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…

  3. An XML-Based Protocol for Distributed Event Services

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the application of an XML (extensible mark-up language)-based protocol to the developing field of distributed processing by way of a computational grid which resembles an electric power grid. XML tags would be used to transmit events between the participants of a transaction, namely, the consumer and the producer of the grid scheme.

  4. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully used these methods to develop semantic reasoning methods for information models in the HIV and radiological image domains.

  5. Overview of the INEX 2008 XML Mining Track

    Science.gov (United States)

    Denoyer, Ludovic; Gallinari, Patrick

    We describe here the XML Mining Track at INEX 2008. This track was launched for exploring two main ideas: first identifying key problems for mining semi-structured documents and new challenges of this emerging field and second studying and assessing the potential of machine learning techniques for dealing with generic Machine Learning (ML) tasks in the structured domain i.e. classification and clustering of semi structured documents. This year, the track focuses on the supervised classification and the unsupervised clustering of XML documents using link information. We consider a corpus of about 100,000 Wikipedia pages with the associated hyperlinks. The participants have developed models using the content information, the internal structure information of the XML documents and also the link information between documents.

  6. Extending the Intermediate Data Structure (IDS) for longitudinal historical databases to include geographic data

    Directory of Open Access Journals (Sweden)

    Finn Hedefalk

    2014-09-01

    Full Text Available The Intermediate Data Structure (IDS) is a standardised database structure for longitudinal historical databases. Such a common structure facilitates data sharing and comparative research. In this study, we propose an extended version of IDS, named IDS-Geo, that also includes geographic data. The geographic data that will be stored in IDS-Geo are primarily buildings and/or property units, and the purpose of these geographic data is mainly to link individuals to places in space. When we want to assign such detailed spatial locations to individuals (in times before there were any detailed house addresses available), we often have to create tailored geographic datasets. In those cases, there are benefits of storing geographic data in the same structure as the demographic data. Moreover, we propose the export of data from IDS-Geo using an eXtensible Markup Language (XML) Schema. IDS-Geo is implemented in a case study using historical property units, for the period 1804 to 1913, stored in a geographically extended version of the Scanian Economic Demographic Database (SEDD). To fit into the IDS-Geo data structure, we included an object lifeline representation of all of the property units (based on the snapshot time representation of single historical maps and poll-tax registers). The case study verifies that the IDS-Geo model is capable of handling geographic data that can be linked to demographic data.

  7. Streaming-based verification of XML signatures in SOAP messages

    DEFF Research Database (Denmark)

    Somorovsky, Juraj; Jensen, Meiko; Schwenk, Jörg

    2010-01-01

    approach for XML processing, the Web Services servers easily become a target of Denial-of-Service attacks. We present a solution for these problems: an external streaming-based WS-Security Gateway. Our implementation is capable of processing XML Signatures in SOAP messages using a streaming-based approach...

  8. The Format Converting/Transfer Agent and Repository System based on ebXML

    Directory of Open Access Journals (Sweden)

    KyeongRim Ahn

    2004-12-01

    Full Text Available With the introduction of XML in the e-commerce environment, various document formats have come into use owing to XML's characteristics, and document formats other than XML are also used to exchange EC-related information. As the number of trading partners increases, the number of exchanged document formats grows and business processing becomes more complex, so management difficulties and duplication problems arise. Moreover, trading partners want to change their plural business workflows into a general and uniform form by defining and arranging the Business Process (BP). Therefore, in this paper, we take XML as the future document standard agreement and discuss the service system architecture and the Repository. The Repository stores and manages document standards, information related to business processing, messaging profiles, and so on, and its structure is designed to cover various XML standards. We also design the system to support the ebXML communication protocol MSH as well as traditional communication protocols such as X.25 and X.400, and implement the exchange of information via FTP.

  9. XML-based analysis interface for particle physics data analysis

    International Nuclear Information System (INIS)

    Hu Jifeng; Lu Xiaorui; Zhang Yangheng

    2011-01-01

    This letter focuses on an XML-based interface and its framework for particle physics data analysis. The interface uses a concise XML syntax to describe the basic tasks of a data analysis (event selection, kinematic fitting, particle identification, etc.) and a basic processing logic: the next step goes on if and only if the current step succeeds. The framework can perform an analysis without compiling, by loading the XML-interface file, setting up at run time and running dynamically. Coding an analysis in XML instead of C++ is easy to understand and use, effectively reduces the workload, and enables users to carry out their analyses quickly. The framework has been developed on the BESⅢ offline software system (BOSS) with object-oriented C++ programming. The functions required by the regular tasks and the basic processing logic are implemented either as standard modules or by inheriting from the modules in BOSS. The interface and its framework have been tested to perform physics analysis. (authors)
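
    The processing logic described here (each step runs only if the previous one succeeded) can be illustrated with a short sketch; the XML tag and attribute names below are hypothetical and do not reproduce the BOSS interface:

    ```python
    # A toy XML-driven analysis chain: steps run in order and stop at the
    # first failure, mimicking the "next step iff this step succeeds" logic.
    import xml.etree.ElementTree as ET

    config = """
    <analysis>
      <step name="event-selection" cut="ncharged>=2"/>
      <step name="kinematic-fit"  chi2max="20"/>
      <step name="particle-id"    hypothesis="pion"/>
    </analysis>
    """

    def run_step(name, params, event):
        # Stand-in for the real algorithms; returns True if the event survives.
        print(f"running {name} with {params}")
        return event.get(name, True)

    def process(event):
        for step in ET.fromstring(config).findall("step"):
            params = {k: v for k, v in step.attrib.items() if k != "name"}
            if not run_step(step.get("name"), params, event):
                return False          # the chain stops at the first failing step
        return True

    print(process({"kinematic-fit": False}))   # rejected at the second step
    print(process({}))                         # passes all steps
    ```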

  10. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...
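
    The core idea of a decision cache can be sketched independently of XACML: authorisation decisions are memoised per request tuple so that the policy decision point is not re-evaluated for repeated, identical requests. The toy policy table and attribute names below are invented and do not reflect the authors' obligation-service design:

    ```python
    # A minimal decision-cache sketch: repeated requests are answered from
    # the cache instead of re-running policy evaluation.
    from functools import lru_cache

    POLICY = {  # toy policy table standing in for a real XACML PDP
        ("nurse", "record/diagnosis", "read"): "Permit",
        ("clerk", "record/diagnosis", "read"): "Deny",
    }

    @lru_cache(maxsize=1024)
    def decide(subject: str, resource: str, action: str) -> str:
        print(f"PDP evaluation for ({subject}, {resource}, {action})")
        return POLICY.get((subject, resource, action), "Deny")

    print(decide("nurse", "record/diagnosis", "read"))  # evaluated by the "PDP"
    print(decide("nurse", "record/diagnosis", "read"))  # answered from the cache
    ```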

  11. XML Schema Guide for Secondary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  12. Simulation framework and XML detector description for the CMS experiment

    CERN Document Server

    Arce, P; Boccali, T; Case, M; de Roeck, A; Lara, V; Liendl, M; Nikitenko, A N; Schröder, M; Strässner, A; Wellisch, H P; Wenzel, H

    2003-01-01

    Currently CMS event simulation is based on GEANT3 while the detector description is built from different sources for simulation and reconstruction. A new simulation framework based on GEANT4 is under development. A full description of the detector is available, and the tuning of the GEANT4 performance and the checking of the ability of the physics processes to describe the detector response is ongoing. Its integration on the CMS mass production system and GRID is also currently under development. The Detector Description Database project aims at providing a common source of information for Simulation, Reconstruction, Analysis, and Visualisation, while allowing for different representations as well as specific information for each application. A functional prototype, based on XML, is already released. Also examples of the integration of DDD in the GEANT4 simulation and in the reconstruction applications are provided.

  13. Integrity Checking and Maintenance with Active Rules in XML Databases

    DEFF Research Database (Denmark)

    Christiansen, Henning; Rekouts, Maria

    2007-01-01

    While specification languages for integrity constraints for XML data have been considered in the literature, actual technologies and methodologies for checking and maintaining integrity are still in their infancy. Triggers, or active rules, which are widely used in previous technologies for the p...... updates, the method indicates trigger conditions and correctness criteria to be met by the trigger code supplied by a developer or possibly automatic methods. We show examples developed in the Sedna XML database system which provides a running implementation of XML triggers....

  14. Web-based infectious disease reporting using XML forms.

    Science.gov (United States)

    Liu, Danhong; Wang, Xia; Pan, Feng; Xu, Yongyong; Yang, Peng; Rao, Keqin

    2008-09-01

    Exploring solutions for infectious disease information sharing among hospital and public health information systems is imperative to the improvement of disease surveillance and emergent response. This paper aimed at developing a method to directly transmit real-time data of notifiable infectious diseases from hospital information systems to public health information systems on the Internet by using a standard eXtensible Markup Language (XML) format. The mechanism and workflow by which notifiable infectious disease data are created, reported and used at health agencies in China were evaluated. The capacity of all participating providers to use electronic data interchange to submit the transactions of data required for notifiable infectious disease reporting was assessed. The minimum data set at the national level that is required for national notifiable infectious disease surveillance was determined. The standards and techniques available worldwide for electronic health data interchange, such as XML, HL7 messaging, CDA and ASTM CCR, were reviewed and compared, and an XML implementation format was defined for hospitals that are able to access the Internet, to provide complete infectious disease reporting. There are 18,703 county or city hospitals in China. All of them have access to basic information infrastructures including computers, e-mail and the Internet. Nearly 10,000 hospitals possess hospital information systems used for electronically recording, retrieving and manipulating patients' information. These systems collect the 23 data items required in the minimum data set for national notifiable infectious disease reporting. In order to transmit these data items to the disease surveillance system and local health information systems instantly and without duplication of data input, an XML schema and a set of standard data elements were developed to define the content, structure and semantics of the data set. These standards
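
    A minimal sketch of how such an XML schema can constrain report instances, using lxml and an invented three-element schema rather than the actual national minimum data set:

    ```python
    # Validate a (hypothetical) notifiable-disease report against an XSD.
    from lxml import etree

    xsd = etree.XMLSchema(etree.fromstring(b"""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="report">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="disease"   type="xs:string"/>
            <xs:element name="onsetDate" type="xs:date"/>
            <xs:element name="hospital"  type="xs:string"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""))

    report = etree.fromstring(
        b"<report><disease>measles</disease>"
        b"<onsetDate>2008-05-01</onsetDate><hospital>H001</hospital></report>")

    print(xsd.validate(report))                           # True
    print(xsd.validate(etree.fromstring(b"<report/>")))   # False: required elements missing
    ```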

  15. An Object-Oriented Approach of Keyword Querying over Fuzzy XML

    Directory of Open Access Journals (Sweden)

    Ting Li

    2016-09-01

    Full Text Available As fuzzy data management has become one of the main research topics and directions, the question of how to obtain useful information by means of keyword query from fuzzy XML documents is attracting increasing investigation. Among the keyword query methods for crisp XML documents, smallest lowest common ancestor (SLCA) semantics is one of the most widely accepted. When users pose a keyword query on fuzzy XML documents with the SLCA semantics, the query results are always incomplete, with low precision, and with no possibility values returned. Moreover, most keyword query semantics on XML documents only consider query results matching all keywords, yet users may also be interested in query results matching partial keywords. To overcome these limitations, in this paper, we investigate how to obtain more comprehensive and meaningful results of keyword querying on fuzzy XML documents. We propose a semantics of object-oriented keyword querying on fuzzy XML documents. First, we introduce the concept of "object tree", analyze different types of matching result object trees and find the "minimum result object trees" which contain all keywords and the "result object trees" which contain partial keywords. Then an object-oriented keyword query algorithm ROstack is proposed to obtain the root nodes of these matching result object trees, together with their possibilities. Finally, experiments are conducted to verify the effectiveness and efficiency of our proposed algorithm.
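
    The SLCA semantics mentioned here can be illustrated on crisp (non-fuzzy) XML with a short sketch; it does not implement the paper's object trees or possibility values:

    ```python
    # SLCA: a node is a smallest lowest common ancestor if its subtree contains
    # every query keyword but no child subtree does.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <bib>
      <book><title>XML retrieval</title><author>Li</author></book>
      <book><title>Databases</title><author>Chen</author></book>
    </bib>""")

    QUERY = {"xml", "li"}

    def keywords_in(node):
        text = " ".join(node.itertext()).lower()
        return {kw for kw in QUERY if kw in text}

    def slca(node):
        """Return the SLCA nodes in the subtree rooted at node."""
        if keywords_in(node) != QUERY:
            return []
        hits = [n for child in node for n in slca(child)]
        return hits if hits else [node]   # no child covers all keywords -> node is an SLCA

    print([n.tag for n in slca(doc)])   # ['book']: the first book, not the whole <bib>
    ```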

  16. XML as a format of expression of Object-Oriented Petri Nets

    Directory of Open Access Journals (Sweden)

    Petr Jedlička

    2004-01-01

    Full Text Available A number of object-oriented (OO) variants have so far been devised for Petri Nets (PN). However, none of these variants has ever been described using an open, independent format – such as XML. This article suggests several possibilities and advantages of such a description. The outlined XML language definition for the description of object-oriented Petri Nets (OOPN) is based on XMI (description of UML object-oriented models), SOX (simple description of general OO systems) and PNML (an XML-based language used for the description of structured and modular PN). For OOPN, the XML form of description represents a standard format for storing as well as for transfer between various OOPN-processing (analysis, simulation, ...) tools.

  17. A Typed Text Retrieval Query Language for XML Documents.

    Science.gov (United States)

    Colazzo, Dario; Sartiani, Carlo; Albano, Antonio; Manghi, Paolo; Ghelli, Giorgio; Lini, Luca; Paoli, Michele

    2002-01-01

    Discussion of XML focuses on a description of Tequyla-TX, a typed text retrieval query language for XML documents that can search on both content and structures. Highlights include motivations; numerous examples; word-based and char-based searches; tag-dependent full-text searches; text normalization; query algebra; data models and term language;…

  18. Lessons in scientific data interoperability: XML and the eMinerals project.

    Science.gov (United States)

    White, T O H; Bruin, R P; Chiang, G-T; Dove, M T; Tyer, R P; Walker, A M

    2009-03-13

    A collaborative environmental eScience project produces a broad range of data, notable as much for its diversity, in source and format, as its quantity. We find that extensible markup language (XML) and associated technologies are invaluable in managing this deluge of data. We describe FoX, a toolkit for allowing Fortran codes to read and write XML, thus allowing existing scientific tools to be easily re-used in an XML-centric workflow.

  19. An XML-based communication protocol for accelerator distributed controls

    International Nuclear Information System (INIS)

    Catani, L.

    2008-01-01

    This paper presents the development of XMLvRPC, an RPC-like communication protocol based, for this particular application, on the TCP/IP and XML (eXtensible Markup Language) tools built into LabVIEW. XML is used to format commands and data passed between client and server, while the socket interface for communication uses either the TCP or UDP transmission protocol. This implementation extends the features of these general-purpose libraries and incorporates solutions that might provide, with limited modifications, full compatibility with the well-established and more general communication protocol XML-RPC, while preserving portability to the different platforms supported by LabVIEW. The XMLvRPC suite of software has been equipped with specific tools for its deployment in distributed control systems, for instance a quasi-automatic configuration and registration of the distributed components and a simple plug-and-play approach to the installation of new services. A key feature is the management of large binary arrays, which allows large binary data sets, e.g. raw images, to be coded more efficiently than with the standard XML coding

  20. An XML-based communication protocol for accelerator distributed controls

    Energy Technology Data Exchange (ETDEWEB)

    Catani, L. [INFN-Roma Tor Vergata, Rome (Italy)], E-mail: luciano.catani@roma2.infn.it

    2008-03-01

    This paper presents the development of XMLvRPC, an RPC-like communication protocol based, for this particular application, on the TCP/IP and XML (eXtensible Markup Language) tools built into LabVIEW. XML is used to format commands and data passed between client and server, while the socket interface for communication uses either the TCP or UDP transmission protocol. This implementation extends the features of these general-purpose libraries and incorporates solutions that might provide, with limited modifications, full compatibility with the well-established and more general communication protocol XML-RPC, while preserving portability to the different platforms supported by LabVIEW. The XMLvRPC suite of software has been equipped with specific tools for its deployment in distributed control systems, for instance a quasi-automatic configuration and registration of the distributed components and a simple plug-and-play approach to the installation of new services. A key feature is the management of large binary arrays, which allows large binary data sets, e.g. raw images, to be coded more efficiently than with the standard XML coding.
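
    The closing point about large binary arrays can be illustrated with a small sketch, unrelated to the LabVIEW implementation: packing values into a base64-encoded payload inside a single element is far more compact than element-per-value XML markup.

    ```python
    # Compare a base64 payload in one element with one element per value.
    import base64, struct
    import xml.etree.ElementTree as ET

    values = list(range(100000))                       # stand-in for a raw image
    packed = struct.pack(f"<{len(values)}I", *values)  # 4-byte unsigned ints

    msg = ET.Element("response")
    ET.SubElement(msg, "data", encoding="base64", dtype="uint32").text = \
        base64.b64encode(packed).decode("ascii")

    as_xml_numbers = "".join(f"<v>{v}</v>" for v in values)
    print(len(ET.tostring(msg)), "bytes as base64 payload")
    print(len(as_xml_numbers), "bytes as element-per-value markup")

    # Decoding on the receiving side:
    blob = base64.b64decode(msg.find("data").text)
    restored = struct.unpack(f"<{len(blob)//4}I", blob)
    assert list(restored) == values
    ```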

  1. Health Topic XML File Description

    Science.gov (United States)

    Describes the structure of the MedlinePlus health topic XML file, including the information categories assigned to each topic and an example of a full health topic record. Available at: https://medlineplus.gov/xmldescription.html

  2. XBRL: Beyond Basic XML

    Science.gov (United States)

    VanLengen, Craig Alan

    2010-01-01

    The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…

  3. EquiX-A Search and Query Language for XML.

    Science.gov (United States)

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  4. Semi-automatic Citation Correction with Lemon8-XML

    Directory of Open Access Journals (Sweden)

    MJ Suhonos

    2009-03-01

    Full Text Available The Lemon8-XML software application, developed by the Public Knowledge Project (PKP), provides an open-source, computer-assisted interface for reliable citation structuring and validation. Lemon8-XML combines citation parsing algorithms with freely-available online indexes such as PubMed, WorldCat, and OAIster. Fully-automated markup of entire bibliographies may be a genuine possibility using this approach. Automated markup of citations would increase bibliographic accuracy while reducing copyediting demands.

  5. An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results

    Directory of Open Access Journals (Sweden)

    Arthur John

    2004-01-01

    Full Text Available Abstract Background Many proteomics initiatives require a seamless bioinformatics integration of a range of analytical steps between sample collection and systems modeling that is immediately accessible to the participants involved in the process. Proteomics profiling, from 2D gel electrophoresis to the putative identification of differentially expressed proteins by comparison of mass spectrometry results with reference databases, includes many components of sample processing, not just analysis and interpretation, that are regularly revisited and updated. Such updates and their dissemination require a suitable data structure; however, no data structure is currently available for storing the data of multiple gels generated through a single proteomics experiment in a single XML file. This paper proposes a data structure based on XML standards to fill the void that exists between the data generated by proteomics experiments and their storage. Results In order to address the resulting procedural fluidity we have adopted and implemented a data model centered on the concept of the annotated gel (AG) as the format for delivery and management of 2D Gel electrophoresis results. An eXtensible Markup Language (XML) schema is proposed to manage, analyze and disseminate annotated 2D Gel electrophoresis results. The structure of AG objects is formally represented using XML, resulting in the definition of the AGML syntax presented here. Conclusion The proposed schema accommodates data on the electrophoresis results as well as the mass-spectrometry analysis of selected gel spots. A web-based software library is being developed to handle data storage, analysis and graphic representation. Computational tools described will be made available at http://bioinformatics.musc.edu/agml. Our development of AGML provides a simple data structure for storing 2D gel electrophoresis data.

  6. An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results.

    Science.gov (United States)

    Stanislaus, Romesh; Jiang, Liu Hong; Swartz, Martha; Arthur, John; Almeida, Jonas S

    2004-01-29

    Many proteomics initiatives require a seamless bioinformatics integration of a range of analytical steps between sample collection and systems modeling that is immediately accessible to the participants involved in the process. Proteomics profiling, from 2D gel electrophoresis to the putative identification of differentially expressed proteins by comparison of mass spectrometry results with reference databases, includes many components of sample processing, not just analysis and interpretation, that are regularly revisited and updated. Such updates and their dissemination require a suitable data structure; however, no data structure is currently available for storing the data of multiple gels generated through a single proteomics experiment in a single XML file. This paper proposes a data structure based on XML standards to fill the void that exists between the data generated by proteomics experiments and their storage. In order to address the resulting procedural fluidity we have adopted and implemented a data model centered on the concept of the annotated gel (AG) as the format for delivery and management of 2D Gel electrophoresis results. An eXtensible Markup Language (XML) schema is proposed to manage, analyze and disseminate annotated 2D Gel electrophoresis results. The structure of AG objects is formally represented using XML, resulting in the definition of the AGML syntax presented here. The proposed schema accommodates data on the electrophoresis results as well as the mass-spectrometry analysis of selected gel spots. A web-based software library is being developed to handle data storage, analysis and graphic representation. Computational tools described will be made available at http://bioinformatics.musc.edu/agml. Our development of AGML provides a simple data structure for storing 2D gel electrophoresis data.

  7. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    Science.gov (United States)

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
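
    A minimal sketch of the mechanism the record relies on, an XSLT stylesheet rendering one XML knowledge structure into another, using lxml; the knowledge structure and stylesheet below are invented and much simpler than the validated hierarchy described here:

    ```python
    # Apply an XSLT transformation to a small XML "knowledge" document.
    from lxml import etree

    knowledge = etree.fromstring("""
    <riskmap>
      <cue name="hopelessness" experts="31" risk="suicide"/>
      <cue name="impulsivity"  experts="12" risk="self-harm"/>
    </riskmap>""")

    stylesheet = etree.XSLT(etree.fromstring("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/riskmap">
        <checklist>
          <xsl:for-each select="cue[@experts &gt; 20]">
            <item risk="{@risk}"><xsl:value-of select="@name"/></item>
          </xsl:for-each>
        </checklist>
      </xsl:template>
    </xsl:stylesheet>"""))

    # Keeps only cues backed by more than 20 experts.
    print(etree.tostring(stylesheet(knowledge), pretty_print=True).decode())
    ```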

  8. Data Hiding and Security for XML Database: A TRBAC- Based Approach

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wan-song; SUN Wei; LIU Da-xin

    2005-01-01

    In order to cope with varying protection granularity levels of XML (eXtensible Markup Language) documents, we propose a TXAC (Two-level XML Access Control) framework, in which an extended TRBAC (Temporal Role-Based Access Control) approach is proposed to deal with dynamic XML data. With its different system components, the TXAC algorithm evaluates access requests efficiently by applying the appropriate access control policy in a dynamic web environment. The method is a flexible and powerful security system offering a multi-level access control solution.

  9. A generic framework for extracting XML data from legacy databases

    NARCIS (Netherlands)

    Thiran, Ph.; Estiévenart, F.; Hainaut, J.L.; Houben, G.J.P.M.

    2005-01-01

    This paper describes a generic framework through which semantics-based XML data can be derived from legacy databases. It consists of first recovering the conceptual schema of the database through reverse engineering techniques, and then converting this schema, or part of it, into XML-compliant data

  10. Engineering XML Solutions Using Views

    NARCIS (Netherlands)

    Rajugan, R.; Chang, E.; Dillon, T.S.; Feng, L.

    In industrial informatics, engineering data intensive Enterprise Information Systems (EIS) is a challenging task without abstraction and partitioning. Further, the introduction of semi-structured data (namely XML) and its rapid adaptation by the commercial and industrial systems increased the

  11. Evaluating XML-Extended OLAP Queries Based on a Physical Algebra

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    . In this paper, we extend previous work on the logical federation of OLAP and XML data sources by presenting a simplified query semantics, a physical query algebra and a robust OLAP-XML query engine as well as the query evaluation techniques. Performance experiments with a prototypical implementation suggest...

  12. Evaluating XML-Extended OLAP Queries Based on a Physical Algebra

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2004-01-01

    is desirable. In this paper, we extend previous work on the logical federation of OLAP and XML data sources by presenting a simplified query semantics,a physical query algebra and a robust OLAP-XML query engine.Performance experiments with a prototypical implementation suggest that the performance for OLAP...

  13. A Study of XML in the Library Science Curriculum in Taiwan and South East Asia

    Science.gov (United States)

    Chang, Naicheng; Huang, Yuhui; Hopkinson, Alan

    2011-01-01

    This paper aims to investigate the current XML-related courses available in 96 LIS schools in South East Asia and Taiwan's 9 LIS schools. Also, this study investigates the linkage of library school graduates in Taiwan who took different levels of XML-related education (that is XML arranged as an individual course or XML arranged as a section unit…

  14. Applying Analogical Reasoning Techniques for Teaching XML Document Querying Skills in Database Classes

    Science.gov (United States)

    Mitri, Michel

    2012-01-01

    XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…

  15. Methods and Technologies of XML Data Modeling for IP Mode Intelligent Measuring and Controlling System

    International Nuclear Information System (INIS)

    Liu, G X; Hong, X B; Liu, J G

    2006-01-01

    This paper presents the IP mode intelligent measuring and controlling system (IMIMCS). Based on the object-oriented modeling technology of UML and XML Schema, the innovative methods and technologies for some key problems of XML data modeling in the IMIMCS are discussed, including the refinement of the system's business by means of UML use-case diagrams, the determination of the content of the XML data model and of the logical relationships among the objects of the XML Schema with the aid of UML class diagrams, and the mapping rules from the UML object model to the XML Schema. Finally, an application of the IMIMCS based on XML for a modern greenhouse is presented. The results show that the modeling methods for the measuring and controlling data in the IMIMCS, involving a multi-layer structure and many operating systems, possess strong reliability and flexibility, guarantee the uniformity of complex XML documents and meet the requirement of data communication across platforms.

  16. Enterprise Architecture Analysis with XML

    NARCIS (Netherlands)

    F.S. de Boer (Frank); M.M. Bonsangue (Marcello); J.F. Jacob (Joost); A. Stam; L.W.N. van der Torre (Leon)

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural

  17. Static Analysis for Dynamic XML

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Møller, Anders; Schwartzbach, Michael Ignatieff

    2002-01-01

    We describe the summary graph lattice for dataflow analysis of programs that dynamically construct XML documents. Summary graphs have successfully been used to provide static guarantees in the JWIG language for programming interactive Web services. In particular, the JWIG compiler is able to check...

  18. Representing nested semantic information in a linear string of text using XML.

    OpenAIRE

    Krauthammer, Michael; Johnson, Stephen B.; Hripcsak, George; Campbell, David A.; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a N...

  19. The Design Space of Type Checkers for XML Transformation Languages

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    We survey work on statically type checking XML transformations, covering a wide range of notations and ambitions. The concept of type may vary from idealizations of DTD to full-blown XML Schema or even more expressive formalisms. The notion of transformation may vary from clean and simple...... transductions to domain-specific languages or integration of XML in general-purpose programming languages. Type annotations can be either explicit or implicit, and type checking ranges from exact decidability to pragmatic approximations. We characterize and evaluate existing tools in this design space......, including a recent result of the authors providing practical type checking of full unannotated XSLT 1.0 stylesheets given general DTDs that describe the input and output languages....

  20. Managing XML Data to optimize Performance into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-06-01

    Full Text Available This paper proposes some possibilities for managing XML data in order to optimize performance in object-relational databases. It details the possibility of storing XML data in such databases, using an Oracle database for exemplification, and tests some techniques for optimizing queries over XMLType tables, such as indexing and table partitioning.

  1. Adding XML to the MIS Curriculum: Lessons from the Classroom

    Science.gov (United States)

    Wagner, William P.; Pant, Vik; Hilken, Ralph

    2008-01-01

    eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…

  2. The XBabelPhish MAGE-ML and XML translator.

    Science.gov (United States)

    Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A

    2008-01-18

    MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.

  3. Scripting XML with Generic Haskell

    NARCIS (Netherlands)

    Atanassow, F.; Clarke, D.; Jeuring, J.T.

    2003-01-01

    A generic program is written once and works on values of many data types. Generic Haskell is a recent extension of the functional programming language Haskell that supports generic programming. This paper discusses how Generic Haskell can be used to implement XML tools whose behaviour depends on

  4. Scripting XML with Generic Haskell

    NARCIS (Netherlands)

    Atanassow, F.; Clarke, D.; Jeuring, J.T.

    2007-01-01

    A generic program is written once and works on values of many data types. Generic Haskell is a recent extension of the functional programming language Haskell that supports generic programming. This paper discusses how Generic Haskell can be used to implement XML tools whose behaviour depends on

  5. Single event monitoring system based on Java 3D and XML data binding

    International Nuclear Information System (INIS)

    Wang Liang; Chinese Academy of Sciences, Beijing; Zhu Kejun; Zhao Jingwei

    2007-01-01

    Online single event monitoring is important to the BESIII DAQ system. Java 3D is an extension of the Java language for 3D technology, and XML data binding is more efficient for handling XML documents than SAX and DOM. This paper mainly introduces the implementation of the BESIII single event monitoring system with Java 3D and XML data binding, and the interface to the track fitting software via JNI technology. (authors)

  6. XML in an Adaptive Framework for Instrument Control

    Science.gov (United States)

    Ames, Troy J.

    2004-01-01

    NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.

  7. SimpleTimeseries: Towards a Standard Representation of Astronomical Time-Series

    Science.gov (United States)

    Brewer, John Michael; Bloom, J. S.; Starr, D.

    2010-01-01

    Centuries of astrophysical data will soon be eclipsed by the unprecedented number of novel events regularly captured by large scale synoptic surveys. In the past, a stately accumulation of data could await inclusion in catalogs. More recently, digital catalogs have been placed on websites, or forwarded in e-mails. Fully exploiting the science opportunities of this new era will require much more rapid and standardized data exchange. With abundant novel sources to choose from, the limited followup resources available will need regularized data formats to help in decision making, whether the ultimate decisions lie with a human or a machine. The Berkeley Transients Classification Pipeline (TCP) has developed an XML-based time-series format to exchange data within the context of the Palomar Transients Factory (PTF). The benefit of a standard time-series representation lies in promulgating it beyond just one collaboration, and so we are publicly releasing the format, SimpleTimeseries. It is also slated to describe time-series within the Virtual Observatory's upcoming VOEvent 2.0 specification. An XML-based format allows easy processing by both machines and humans. We have put together examples and documentation which show the flexibility of SimpleTimeseries on dotastro.org, where you can also find the XML schema, and public light curves in the new format.

  8. Representing nested semantic information in a linear string of text using XML.

    Science.gov (United States)

    Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.

  9. Design and implementation of an XML based object-oriented detector description database for CMS

    International Nuclear Information System (INIS)

    Liendl, M.

    2003-04-01

    This thesis deals with the development of a detector description database (DDD) for the compact muon solenoid (CMS) experiment at the large hadron collider (LHC) located at the European organization for nuclear research (CERN). DDD is a fundamental part of the CMS offline software with its main applications, simulation and reconstruction. Both are in need of different models of the detector in order to efficiently solve their specific tasks. In the thesis the requirements to a detector description database are analyzed and the chosen solution is described in detail. It comprises the following components: an XML based detector description language, a runtime system that implements an object-oriented transient representation of the detector, and an application programming interface to be used by client applications. One of the main aspects of the development is the design of the DDD components. The starting point is a domain model capturing concisely the characteristics of the problem domain. The domain model is transformed into several implementation models according to the guidelines of the model driven architecture (MDA). Implementation models and appropriate refinements thereof are foundation for adequate implementations. Using the MDA approach, a fully functional prototype was realized in C++ and XML. The prototype was successfully tested through seamless integration into both the simulation and the reconstruction framework of CMS. (author)

  10. XML Namespace與RDF的基本概念 | The Basic Concepts of XML Namespace and RDF

    Directory of Open Access Journals (Sweden)

    陳嵩榮 Sung-Jung Chen

    1999-04-01

    Full Text Available

    Pages: 88-100

    The XML Namespaces mechanism allows element and attribute names in an XML document to be qualified by a URI, providing a naming scheme that is unique on the Web and thereby resolving the potential conflicts between element and attribute names coming from different XML documents. RDF mainly provides an infrastructure for the many applications of metadata on the Web, enabling applications to exchange metadata over the Web and so promoting the automated processing of network resources. This article introduces the data models and syntax of XML Namespaces and RDF through a series of examples.

    XML namespaces provide a simple method for qualifying element and attribute names used in XML documents by associating them with namespaces identified by URI references. RDF is a foundation for processing metadata. It provides interoperability between
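
    A short sketch of the namespace mechanism the article describes: two elements share the local name "title" but are distinguished by the namespace URIs bound to their prefixes. The Dublin Core URI is real; the second URI and the document itself are made up for illustration.

    ```python
    # Qualified names with XML namespaces in ElementTree.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <record xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:book="http://example.org/book">
      <dc:title>Metadata on the Web</dc:title>
      <book:title>RDF and XML Namespaces</book:title>
    </record>""")

    ns = {"dc": "http://purl.org/dc/elements/1.1/",
          "book": "http://example.org/book"}

    print(doc.find("dc:title", ns).text)    # Metadata on the Web
    print(doc.find("book:title", ns).text)  # RDF and XML Namespaces

    # Internally ElementTree stores the expanded names:
    print([child.tag for child in doc])
    # ['{http://purl.org/dc/elements/1.1/}title', '{http://example.org/book}title']
    ```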

  11. Generic and updatable XML value indices covering equality and range lookups

    NARCIS (Netherlands)

    E. Sidirourgos (Eleftherios); P.A. Boncz (Peter)

    2008-01-01

    We describe a collection of indices for XML text, element, and attribute node values that (i) consume little storage, (ii) have low maintenance overhead, (iii) permit fast equi-lookup on string values, and (iv) support range-lookup on any XML typed value (e.g., double, dateTime). The

  12. Generic and Updatable XML Value Indices Covering Equality and Range Lookups

    NARCIS (Netherlands)

    E. Sidirourgos (Eleftherios); P.A. Boncz (Peter)

    2009-01-01

    We describe a collection of indices for XML text, element, and attribute node values that (i) consume little storage, (ii) have low maintenance overhead, (iii) permit fast equi-lookup on string values, and (iv) support range-lookup on any XML typed value (e.g., double, dateTime). The

  13. Progress on an implementation of MIFlowCyt in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communication (DICOM) standards. CytometryML has been, and will continue to be, extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as was the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate the searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also result in textual and numeric data being published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with the combination of HTML and CSS.

  14. PRIDEViewer: a novel user-friendly interface to visualize PRIDE XML files.

    Science.gov (United States)

    Medina-Aunon, J Alberto; Carazo, José M; Albar, Juan Pablo

    2011-01-01

    Current standardization initiatives have greatly contributed to sharing the information derived from proteomics experiments. One of these initiatives is the XML-based repository PRIDE (PRoteomics IDEntification database), although an XML-based document does not appear to present a user-friendly view at first glance. PRIDEViewer is a novel Java-based application that presents the information available in a PRIDE XML file in a user-friendly manner, facilitating the interaction among end users as well as the understanding and evaluation of the compiled information. PRIDEViewer is freely available at: http://proteo.cnb.csic.es/prideviewer/. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Secure combination of XML signature application with message aggregation in multicast settings

    DEFF Research Database (Denmark)

    Becker, Andreas; Jensen, Meiko

    2013-01-01

    The similarity-based aggregation of XML documents is a proven method for reducing network traffic. However, when used in conjunction with XML security standards, a lot of pitfalls, but also optimization potentials exist. In this paper, we investigate these issues, showing how to exploit similarity......-based aggregation for rapid distribution of digitally signed XML data. Using our own implementation in two different experimental settings, we provide both a thorough evaluation and a security proof for our approach. By this we prove both feasibility and security, and we illustrate how to achieve a network traffic...

  16. XML-Based Generator of C++ Code for Integration With GUIs

    Science.gov (United States)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner; more importantly, XML allows not just the storing of data but also a description of what each data item is. The XML file contains information useful for rendering the data by other applications, and it is also used to generate data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
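
    A minimal sketch of the general approach, not the NASA tool itself: a Python script reads an invented XML parameter specification and emits a C++ struct, so that the simulation code and a GUI could share a single definition.

    ```python
    # Generate a C++ struct from a (hypothetical) XML parameter specification.
    import xml.etree.ElementTree as ET

    spec = ET.fromstring("""
    <parameters name="Device">
      <param name="length_nm" type="double" default="10.0"/>
      <param name="n_layers"  type="int"    default="3"/>
    </parameters>""")

    cpp_type = {"double": "double", "int": "int", "string": "std::string"}

    lines = [f"struct {spec.get('name')} {{"]
    for p in spec.findall("param"):
        lines.append(f"    {cpp_type[p.get('type')]} {p.get('name')} = {p.get('default')};")
    lines.append("};")

    print("\n".join(lines))
    # struct Device {
    #     double length_nm = 10.0;
    #     int n_layers = 3;
    # };
    ```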

  17. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.

  18. XML schema matching: balancing efficiency and effectiveness by means of clustering

    NARCIS (Netherlands)

    Smiljanic, M.

    2006-01-01

    In this thesis we place our research in the scope of a tool which looks for information within XML data on the Internet. We envision a personal schema querying system which enables a user to express his information need by specifying a personal XML schema. The user can also ask queries over his

  19. A transaction model for XML databases

    NARCIS (Netherlands)

    Dekeyser, S.; Hidders, A.J.H.; Paredaens, J.

    2004-01-01

    The hierarchical and semistructured nature of XML data may cause complicated update behavior. Updates should not be limited to entire document trees, but should ideally involve subtrees and even individual elements. Providing a suitable scheduling algorithm for semistructured data can

  20. XML-BSPM: an XML format for storing Body Surface Potential Map recordings.

    Science.gov (United States)

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-05-14

    The Body Surface Potential Map (BSPM) is an electrocardiographic method for recording and displaying the electrical activity of the heart from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary, for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM. In fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. This XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable, for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, i.e. the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web based BSPM viewer. This format, if widely adopted, could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats

  1. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    Science.gov (United States)

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  2. 77 FR 46986 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Science.gov (United States)

    2012-08-07

    Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema. AGENCY: Federal Energy Regulatory Commission. The Commission is making available on its Web site (http://www.ferc.gov) a draft of the XML schema for the revised Electric Quarterly Report filing process; the draft XML schema is now available at the links given in the Supplementary Information section.

  3. Association rule extraction from XML stream data for wireless sensor networks.

    Science.gov (United States)

    Paik, Juryon; Nam, Junghyun; Kim, Ung Mo; Won, Dongho

    2014-07-18

    With the advances in wireless sensor networks, they yield massive volumes of disparate, dynamic, geographically distributed and heterogeneous data. The data mining community has attempted to extract knowledge from the huge amount of data that they generate. However, previous mining work in WSNs has focused on supporting simple relational data structures, like one table per network, while there is a need for more complex data structures. This deficiency motivates the use in WSNs of XML, the current de facto format for data exchange and for modeling a wide variety of data sources over the web, in order to encourage the interchangeability of heterogeneous types of sensors and systems. However, mining XML data for WSNs has two challenging issues: one is the endless data flow; and the other is the complex tree structure. In this paper, we present several new definitions and techniques related to association rule mining over XML data streams in WSNs. To the best of our knowledge, this work provides the first approach to mining XML stream data that generates frequent tree items without any redundancy.
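
    A toy sketch of the two challenges named here, with invented sensor XML: the stream is parsed incrementally so memory stays bounded, and tag co-occurrences per reading are counted as a crude stand-in for frequent-itemset discovery. This does not reproduce the paper's tree-mining algorithm.

    ```python
    # Incremental (streaming) XML processing with iterparse plus pair counting.
    import io, itertools
    from collections import Counter
    import xml.etree.ElementTree as ET

    stream = io.BytesIO(b"""
    <readings>
      <reading><temp>31</temp><humidity>80</humidity></reading>
      <reading><temp>30</temp><smoke>1</smoke></reading>
      <reading><temp>33</temp><humidity>78</humidity></reading>
    </readings>""")

    pair_counts = Counter()
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "reading":
            items = sorted(child.tag for child in elem)
            pair_counts.update(itertools.combinations(items, 2))
            elem.clear()            # discard the subtree: memory stays bounded

    print(pair_counts.most_common(1))   # [(('humidity', 'temp'), 2)]
    ```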

  4. An exponentiation method for XML element retrieval.

    Science.gov (United States)

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% over the baseline BM25 in terms of MAP.

  5. FireCalc: An XML-based framework for distributed data analysis

    International Nuclear Information System (INIS)

    Duarte, A.S.; Santos, J.H.; Fernandes, H.; Neto, A.; Pereira, T.; Varandas, C.A.F.

    2008-01-01

    Requirements and specifications for Control Data Access and Communication (CODAC) systems in fusion reactors point towards flexible and modular solutions, independent from operating system and computer architecture. These concepts can also be applied to calculation and data analysis systems, where highly standardized solutions must also apply in order to anticipate long time-scales and high technology evolution changes. FireCalc is an analysis tool based on standard Extensible Markup Language (XML) technologies. Actions are described in an XML file, which contains necessary data specifications and the code or references to scripts. This is used by the user to send the analysis code and data to a server, which can be running either locally or remotely. Communications between the user and the server are performed through XML-RPC, an XML based remote procedure call, thus enabling the client and server to be coded in different computer languages. Access to the database, security procedures and calls to the code interpreter are handled through independent modules, which unbinds them from specific solutions. Currently there is an implementation of the FireCalc framework in Java, that uses the Shared Data Access System (SDAS) for accessing the ISTTOK database and the Scilab kernel for the numerical analysis

  6. FireCalc: An XML-based framework for distributed data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, A.S. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais P-1049-001 Lisboa (Portugal)], E-mail: andre.duarte@cfn.ist.utl.pt; Santos, J.H.; Fernandes, H.; Neto, A.; Pereira, T.; Varandas, C.A.F. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais P-1049-001 Lisboa (Portugal)

    2008-04-15

    Requirements and specifications for Control Data Access and Communication (CODAC) systems in fusion reactors point towards flexible and modular solutions, independent from operating system and computer architecture. These concepts can also be applied to calculation and data analysis systems, where highly standardized solutions must also apply in order to anticipate long time-scales and high technology evolution changes. FireCalc is an analysis tool based on standard Extensible Markup Language (XML) technologies. Actions are described in an XML file, which contains necessary data specifications and the code or references to scripts. This is used by the user to send the analysis code and data to a server, which can be running either locally or remotely. Communications between the user and the server are performed through XML-RPC, an XML based remote procedure call, thus enabling the client and server to be coded in different computer languages. Access to the database, security procedures and calls to the code interpreter are handled through independent modules, which unbinds them from specific solutions. Currently there is an implementation of the FireCalc framework in Java, that uses the Shared Data Access System (SDAS) for accessing the ISTTOK database and the Scilab kernel for the numerical analysis.

  7. ForConX: A forcefield conversion tool based on XML.

    Science.gov (United States)

    Lesch, Volker; Diddens, Diddo; Bernardes, Carlos E S; Golub, Benjamin; Dequidt, Alain; Zeindlhofer, Veronika; Sega, Marcello; Schröder, Christian

    2017-04-05

    The force field conversion from one MD program to another is exhausting and error-prone. Although single conversion tools from one MD program to another exist, not every combination and direction of conversion is available for the popular MD programs Amber, Charmm, Dl-Poly, Gromacs, and Lammps. We present here a general tool for the force field conversion on the basis of an XML document. The force field is converted to and from this XML structure, facilitating the implementation of new MD programs for the conversion. Furthermore, the XML structure is human readable and can be manipulated before continuing the conversion. We report, as test cases, the conversions of topologies for acetonitrile, dimethylformamide, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, comprising also Urey-Bradley and Ryckaert-Bellemans potentials. © 2017 Wiley Periodicals, Inc.
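
    The conversion-through-an-XML-intermediate idea can be illustrated with a few lines of standard-library Python. The element and attribute names below are invented for the sketch and do not reflect ForConX's actual schema; a real converter would read one MD program's topology, populate such a tree, and then emit another program's input format from it.

```python
# Sketch of conversion through an XML intermediate; element/attribute names
# are hypothetical and do not reflect ForConX's actual schema.
import xml.etree.ElementTree as ET

forcefield_xml = """
<forcefield>
  <bond type1="CT" type2="HC" k="340.0" r0="1.090"/>
  <bond type1="CT" type2="CT" k="310.0" r0="1.526"/>
</forcefield>
"""

root = ET.fromstring(forcefield_xml)
# Emit the bonded parameters in a simple target-program text format
# (purely illustrative output format).
for bond in root.findall("bond"):
    print("BOND  {type1:4s} {type2:4s} {k:>8s} {r0:>8s}".format(**bond.attrib))
```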

  8. The Simplest Evaluation Measures for XML Information Retrieval that Could Possibly Work

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Mihajlovic, V.

    2005-01-01

    This paper reviews several evaluation measures developed for evaluating XML information retrieval (IR) systems. We argue that these measures, some of which are currently in use by the INitiative for the Evaluation of XML Retrieval (INEX), are complicated, hard to understand, and hard to explain to

  9. Experience in Computer-Assisted XML-Based Modelling in the Context of Libraries

    CERN Document Server

    Niinimäki, M

    2003-01-01

    In this paper, we introduce a software tool called Meta Data Visualisation (MDV) that (i) assists the user with a graphical user interface in the creation of his specific document types, (ii) creates a database according to these document types, (iii) allows the user to browse the database, and (iv) uses native XML presentation of the data in order to allow queries or data to be exported to other XML-based systems. We illustrate the use of MDV and XML modelling using library-related examples to build a bibliographic database. In our opinion, creating document type descriptions corresponds to conceptual and logical database design in a database design process. We consider that this design can be supported with a suitable set of tools that help the designer concentrate on conceptual issues instead of implementation issues. Our hypothesis is that using the methodology presented in this paper we can create XML databases that are useful and relevant, and with which MDV works as a user interface.

  10. XVCL: XML-based Variant Configuration Language

    DEFF Research Database (Denmark)

    Jarzabek, Stan; Basset, Paul; Zhang, Hongyu

    2003-01-01

    XVCL (XML-based Variant Configuration Language) is a meta-programming technique and tool that provides effective reuse mechanisms. XVCL is an open source software developed at the National University of Singapore. Being a modern and versatile version of Bassett's frames, a technology that has...

  11. Visual word representation in the brain

    NARCIS (Netherlands)

    Ramakrishnan, K.; Groen, I.; Scholte, S.; Smeulders, A.; Ghebreab, S.

    2013-01-01

    The human visual system is thought to use features of intermediate complexity for scene representation. How the brain computationally represents intermediate features is unclear, however. To study this, we tested the Bag of Words (BoW) model in computer vision against human brain activity. This

  12. XML DTD and Schemas for HDF-EOS

    Science.gov (United States)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files, and an equivalent standard in the form of schemas, have been developed.

  13. An Exponentiation Method for XML Element Retrieval

    Science.gov (United States)

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning the weight are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the baseline BM25 in terms of MAP. PMID:24696643

  14. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web.This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  15. Internet-based data interchange with XML

    Science.gov (United States)

    Fuerst, Karl; Schmidt, Thomas

    2000-12-01

    In this paper, a complete concept for Internet Electronic Data Interchange (EDI) - a well-known buzzword in the area of logistics and supply chain management to enable the automation of the interactions between companies and their partners - using XML (eXtensible Markup Language) will be proposed. This approach is based on the Internet and XML, because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is mostly too costly for small and medium sized enterprises, which want to integrate their suppliers and customers in a supply chain. The paper will also present the results of the implementation of a prototype for such a system, which has been developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early warning system to detect problems during the parts delivery process as early as possible, and a transport-following system to track shipments during transportation.
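
    A minimal sketch of the kind of lightweight XML message such an Internet-EDI approach exchanges in place of EDIFACT segments is shown below; the element names and the simple early-warning check are illustrative assumptions, not the prototype's actual message format.

```python
# Sketch of a small XML delivery notice plus an early-warning check; the
# vocabulary is invented for illustration, not the prototype's actual format.
import xml.etree.ElementTree as ET

notice = ET.Element("DeliveryNotice", id="DN-0815")
part = ET.SubElement(notice, "Part", number="4711", quantity="250")
ET.SubElement(part, "EstimatedArrival").text = "2000-12-20T14:00"
ET.SubElement(part, "Status").text = "delayed"

print(ET.tostring(notice, encoding="unicode"))

# Early warning: flag any part whose status indicates a delivery problem.
for p in notice.findall("Part"):
    if p.findtext("Status") == "delayed":
        print("warning: part", p.get("number"), "is delayed")
```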

  16. Association Rule Extraction from XML Stream Data for Wireless Sensor Networks

    Science.gov (United States)

    Paik, Juryon; Nam, Junghyun; Kim, Ung Mo; Won, Dongho

    2014-01-01

    As wireless sensor networks advance, they yield massive volumes of disparate, dynamic, geographically distributed and heterogeneous data. The data mining community has attempted to extract knowledge from the huge amount of data that they generate. However, previous mining work in WSNs has focused on supporting simple relational data structures, like one table per network, while there is a need for more complex data structures. This deficiency motivates the use in WSNs of XML, the current de facto format for the exchange and modeling of a wide variety of data sources over the web, in order to encourage the interchangeability of heterogeneous types of sensors and systems. However, mining XML data for WSNs raises two challenging issues: one is the endless data flow; the other is the complex tree structure. In this paper, we present several new definitions and techniques related to association rule mining over XML data streams in WSNs. To the best of our knowledge, this work provides the first approach to mining XML stream data that generates frequent tree items without any redundancy. PMID:25046017

  17. Rock.XML - Towards a library of rock physics models

    Science.gov (United States)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It not only contains data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language XML, making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.

  18. XML como medio de normalización y desarrollo documental.

    Directory of Open Access Journals (Sweden)

    de la Rosa, Antonio

    1999-12-01

    Full Text Available The Web, as a working environment for information science professionals, demands the exploitation of new tools. These tools are intended to allow information to be managed in a structured and organised way. XML and its specifications offer a wide range of solutions for the problems of our domain, whether for the development of documentary software or for day-to-day tasks. In this article, the XML standard is briefly presented, its possible impact on the profession is evaluated, and the possibilities of using it as a vehicle for the creation of information systems are discussed.

    The Web, as a working environment for documentation professionals, requires the use of new tools that allow information to be managed in a structured and organised way. XML and the specifications derived from it offer a wide range of solutions to the various problems that concern our discipline, both for the development of documentary software and for day-to-day tasks. This article briefly presents the XML standard and evaluates its possible impact on the profession, as well as the possibilities of using it as a vehicle for the creation of information systems.

  19. Integrating personalized medical test contents with XML and XSL-FO.

    Science.gov (United States)

    Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas

    2011-03-01

    In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module to automatically generate test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. The software module by use of an open source print formatter consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly the module permitted an individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.
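
    Steps (2) and (3) of the method can be sketched as follows. The element names are illustrative, and the rendering step assumes Apache FOP as the open-source print formatter and a stylesheet (test2fo.xsl) that maps the intermediate to XSL-FO; neither is necessarily the exact toolchain used by the authors.

```python
# Sketch of steps (2) and (3): build an XML intermediate for one candidate and
# render it via XSL-FO. lxml-free; Apache FOP and test2fo.xsl are assumed to
# exist and are placeholders, as are the element names.
import subprocess
import xml.etree.ElementTree as ET

test = ET.Element("test", candidate="12345")
q = ET.SubElement(test, "question", id="Q17")
ET.SubElement(q, "stem").text = "Which enzyme initiates glycolysis?"
for label, choice in zip("ABCD", ["Hexokinase", "Aldolase", "Enolase", "Pyruvate kinase"]):
    ET.SubElement(q, "choice", label=label).text = choice

ET.ElementTree(test).write("test_12345.xml", encoding="utf-8", xml_declaration=True)

# Hand the intermediate to an XSL-FO processor (Apache FOP assumed installed);
# test2fo.xsl would map the test elements to paginated XSL-FO layout.
subprocess.run(["fop", "-xml", "test_12345.xml", "-xsl", "test2fo.xsl",
                "-pdf", "test_12345.pdf"], check=True)
```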

  20. Using XML technology for the ontology-based semantic integration of life science databases.

    Science.gov (United States)

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred life science databases with constantly growing contents and varying areas of specialization are publicly accessible via the internet. Database integration, consequently, is a fundamental prerequisite for being able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large scale database integration at present takes considerable effort. As extensible markup language (XML) is increasingly accepted as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.

  1. Efficient XML Interchange (EXI) Compression and Performance Benefits: Development, Implementation and Evaluation

    Science.gov (United States)

    2010-03-01

    a. Information Grammar Theory (Chomsky). Both grammars and events are learned for each XML document by means of a supporting schema or by processing the XML document. The learning process is similar to Chomsky grammars, a hierarchical-based formal grammar for defining a language.

  2. A new XML-aware compression technique for improving performance of healthcare information systems over hospital networks.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Most organizations exchange, collect, store and process data over the Internet. Many hospital networks deploy Web services to send and receive patient information. SOAP (Simple Object Access Protocol) is the most widely used communication protocol for Web services. XML is the standard encoding language of SOAP messages. However, the major drawback of XML messages is the high network traffic caused by large overheads. In this paper, two XML-aware compressors are suggested to compress patient messages stemming from any data transactions between Web clients and servers. The proposed compression techniques are based on XML structure concepts and use both fixed-length and Huffman encoding methods for translating the XML message tree. Experiments show that they outperform all the conventional compression methods and can save a tremendous amount of network bandwidth.
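
    A deliberately simplified, structure-aware sketch of the idea (not the authors' fixed-length/Huffman scheme, and not a full round-trip codec) separates the element structure from the text content: tag names are replaced by one-byte dictionary codes and the pooled text is deflate-compressed.

```python
# Much-simplified structure-aware sketch: replace tag names with one-byte
# dictionary codes and compress the pooled text separately. Illustrative only,
# not the paper's fixed-length/Huffman scheme and not losslessly invertible.
import xml.etree.ElementTree as ET
import zlib

soap = "<Envelope><Body><Patient><Name>Jane Doe</Name><Age>42</Age></Patient></Body></Envelope>"
root = ET.fromstring(soap)

tags = sorted({el.tag for el in root.iter()})
tag_code = {t: i for i, t in enumerate(tags)}          # fixed-length 1-byte codes

structure = bytes(tag_code[el.tag] for el in root.iter())
text_pool = "\x00".join((el.text or "") for el in root.iter())
compressed_text = zlib.compress(text_pool.encode("utf-8"))

print("tag dictionary:", tag_code)
print("original bytes:", len(soap))
print("encoded bytes: ", len(structure) + len(compressed_text) + len("|".join(tags)))
```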

  3. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, the choice of scripting languages, graphic design, user interface design, and database design. The challenge for a Web designer/implementer is the ability to create an attractive and informative Web site. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web site's objective. In this article we discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We also discuss the advantages and disadvantages of HTML compared with XML in designing and implementing a Web site. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML, to carry out the present investigation. (author)

  4. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html (contact: kikuchi@hydra.mki.co.jp).

  5. jMRUI plugin software (jMRUI2XML) to allow automated MRS processing and XML-based standardized output

    Czech Academy of Sciences Publication Activity Database

    Mocioiu, V.; Ortega-Martorell, S.; Olier, I.; Jabłoński, Michal; Starčuková, Jana; Lisboa, P.; Arús, C.; Julia-Sapé, M.

    2015-01-01

    Roč. 28, S1 (2015), S518 ISSN 0968-5243. [ESMRMB 2015. Annual Scientific Meeting /32./. 01.09.2015-03.09.2015, Edinburgh] Institutional support: RVO:68081731 Keywords : MR Spectroscopy * signal processing * jMRUI * software development * XML Subject RIV: BH - Optics, Masers, Lasers

  6. XTCE. XML Telemetry and Command Exchange Tutorial

    Science.gov (United States)

    Rice, Kevin; Kizzort, Brad; Simon, Jerry

    2010-01-01

    An XML Telemetry and Command Exchange (XTCE) tutorial oriented towards packets or minor frames is shown. The contents include: 1) The Basics; 2) Describing Telemetry; 3) Describing the Telemetry Format; 4) Commanding; 5) Forgotten Elements; 6) Implementing XTCE; and 7) GovSat.

  7. Transitioning from XML to RDF: Considerations for an effective move towards Linked Data and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Juliet L. Hardesty

    2016-04-01

    Full Text Available Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF. Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
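
    The shift in perspective from replicating XML structure to stating RDF relationships can be sketched as below; rdflib is assumed to be installed, and the record and choice of Dublin Core terms are illustrative rather than drawn from any particular library standard.

```python
# Sketch of moving from an XML metadata record to RDF triples; rdflib is an
# assumed dependency and the record/vocabulary shown here are illustrative.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

record_xml = """<record id="item42">
  <title>Photograph of the old library</title>
  <creator>Smith, Alice</creator>
</record>"""

rec = ET.fromstring(record_xml)
item = URIRef("https://example.org/items/" + rec.get("id"))

g = Graph()
g.add((item, DCTERMS.title, Literal(rec.findtext("title"))))
g.add((item, DCTERMS.creator, Literal(rec.findtext("creator"))))

print(g.serialize(format="turtle"))
```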

  8. XSemantic: An Extension of LCA Based XML Semantic Search

    Science.gov (United States)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is a keyword search, because it does not require any knowledge of XML structure or learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the semantics. Firstly, through semantic term expansion, our system is robust against ambiguous keywords by using the domain ontology. Secondly, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Thirdly, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that the search results with higher relevance are presented to the users first. On the other hand, as in the LCA and proximity search approaches, we investigated the problem of the information included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define our simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
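
    The lowest-common-ancestor computation that such keyword-search approaches build on can be sketched with the standard library; because ElementTree elements carry no parent pointers, a parent map is built first. This is only the underlying LCA idea, not XSemantic's LCEA rule, term expansion, or ranking.

```python
# Sketch of the LCA idea underlying keyword search over XML (not the full
# XSemantic algorithm); ElementTree needs an explicit parent map for this.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<bib><book><title>XML Retrieval</title><author>Lee</author></book>"
    "<book><title>Databases</title><author>Kim</author></book></bib>"
)
parent = {child: p for p in doc.iter() for child in p}

def ancestors(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(a, b):
    seen = set(map(id, ancestors(a)))
    for node in ancestors(b):
        if id(node) in seen:
            return node

# Keyword matches for "XML" and "Lee" both sit under the first <book>.
hit1 = doc.find(".//title")          # contains "XML Retrieval"
hit2 = doc.findall(".//author")[0]   # contains "Lee"
print(lca(hit1, hit2).tag)           # -> book
```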

  9. XML Diagnostics Description Standard

    International Nuclear Information System (INIS)

    Neto, A.; Fernandes, H.; Varandas, C.; Lister, J.; Yonekawa, I.

    2006-01-01

    A standard for the self-description of fusion plasma diagnostics will be presented, based on the Extensible Markup Language (XML). The motivation is to maintain and organise the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money when problems arise. Since there is no existing standard to organise this kind of information, every Association stores and organises each experiment in different ways. This can lead to severe problems when the organisation schema is poorly documented or written in national languages. The exchange of scientists, researchers and engineers between laboratories is a common practice nowadays. Sometimes they have to install new diagnostics or to update existing ones and frequently they lose a great deal of time trying to understand the currently installed system. The most common problems are: no documentation available; the person who understands it has left; documentation written in the national language. Standardisation is the key to solving all the problems mentioned. From the commercial information on the diagnostic (component supplier; component price) to the hardware description (component specifications; drawings) to the operation of the equipment (finite state machines) through change control (who changed what and when) and internationalisation (information at least in the native language and in English), a common XML schema will be proposed. This paper will also discuss an extension of these ideas to the self-description of ITER plant systems, since the problems will be identical. (author)

  10. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    Science.gov (United States)

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
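
    The memory-bounded sequential-access idea can be illustrated with the standard library's iterparse, independent of the OpenMS-based library the authors present; the element name and file path below are placeholders in the spirit of mzML.

```python
# Sketch of memory-bounded sequential access to a large mzML-like file using
# the standard library (illustrative; the paper's library is C++/Python on
# top of OpenMS, not ElementTree).
import xml.etree.ElementTree as ET

def iter_spectra(path):
    # Stream <spectrum> elements one at a time and free them afterwards,
    # so memory use stays flat even for multi-gigabyte files.
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag.endswith("spectrum"):
            yield elem.get("id"), len(list(elem))
            elem.clear()               # drop the subtree we just processed

# Usage (the file name is a placeholder):
# for spectrum_id, n_children in iter_spectra("run01.mzML"):
#     print(spectrum_id, n_children)
```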

  11. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    Directory of Open Access Journals (Sweden)

    Hannes L Röst

    Full Text Available In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.

  12. Using small XML elements to support relevance

    NARCIS (Netherlands)

    G. Ramirez Camps (Georgina); T.H.W. Westerveld (Thijs); A.P. de Vries (Arjen)

    2006-01-01

    Small XML elements are often estimated relevant by the retrieval model but they are not desirable retrieval units. This paper presents a generic model that exploits the information obtained from small elements. We identify relationships between small and relevant elements and use this

  13. Using a Combination of UML, C2RM, XML, and Metadata Registries to Support Long-Term Development/Engineering

    Science.gov (United States)

    2003-01-01

    Key XML specifications referenced in the report include: XCBF (authentication), XACML and SAML (authorization), P3P (privacy), XrML (digital rights management), DASL and WebDAV (content management), content syndication, registry/repository, BPSS, e-commerce XML/EDI, Universal Business Language (UBL), and HR-XML (human resources).

  14. The application of XML in the effluents data modeling of nuclear facilities

    International Nuclear Information System (INIS)

    Yue Feng; Lin Quanyi; Yue Huiguo; Zhang Yan; Zhang Peng; Cao Jun; Chen Bo

    2013-01-01

    Radioactive effluent data, which can provide information to distinguish whether facilities, waste disposal and control systems run normally, are an important basis for safety regulation and emergency management. They can also provide the information needed to start the emergency alarm system as soon as possible. XML technology is an effective tool for realizing a standard for effluent data exchange, facilitating data collection, statistics and analysis, and strengthening the effectiveness of effluent regulation. This paper first introduces the concept of XML and the choice of effluent data modeling method, then describes the effluent modeling process, and finally presents the model and its application. While the application of XML to the effluent data modeling of nuclear facilities still has deficiencies, it is a beneficial attempt at the informatization of effluent management. (authors)

  15. IR and OLAP in XML document warehouses

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    In this paper we propose to combine IR and OLAP (On-Line Analytical Processing) technologies to exploit a warehouse of text-rich XML documents. In the system we plan to develop, a multidimensional implementation of a relevance modeling document model will be used for interactively querying...

  16. Type Checking with XML Schema in XACT

    DEFF Research Database (Denmark)

    Kirkegaard, Christian; Møller, Anders

    to support XML Schema as type formalism. The technique is able to model advanced features, such as type derivations and overloaded local element declarations, and also datatypes of attribute values and character data. Moreover, we introduce optional type annotations to improve modularity of the type checking...

  17. Demosaicing and Superresolution for Color Filter Array via Residual Image Reconstruction and Sparse Representation

    OpenAIRE

    Sun, Guangling

    2012-01-01

    A framework of demosaicing and superresolution for color filter array (CFA) via residual image reconstruction and sparse representation is presented. Given the intermediate image produced by certain demosaicing and interpolation technique, a residual image between the final reconstruction image and the intermediate image is reconstructed using sparse representation. The final reconstruction image has richer edges and details than that of the intermediate image. Specifically, a generic dictionar...

  18. Treating JSON as a subset of XML

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2012-01-01

    XForms 1.0 was an XML technology originally designed as a replacement for HTML Forms. In addressing certain shortcomings of XForms 1.0, the next version, XForms 1.1 became far more than a forms language, but a declarative application language where application production time could be

  19. Enterprise Architecture Analysis with XML

    OpenAIRE

    Boer, Frank; Bonsangue, Marcello; Jacob, Joost; Stam, A.; Torre, Leon

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural elements and their relationships. A semantic model is defined as a formal interpretation of the symbolic model. This provides a formal approach to the design of architectural description languages and a g...

  20. The CostGlue XML Schema

    OpenAIRE

    Furfari, Francesco; Potortì, Francesco; Savić, Dragan

    2008-01-01

    An XML schema for scientific metadata is described. It is used for the CostGlue archival program, developed in the framework of the European Union COST Action 285: "Modelling and simulation tools for research in emerging multi-service telecommunications". The schema is freely available under the GNU LGPL license at http://wnet.isti.cnr.it/software/costglue/schema/2007/CostGlue.xsd, or at its official repository, at http://lt.fe.uni-lj.si/costglue/schema/2007/costglue.xsd.

  1. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate the potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
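
    The document-centric, hierarchical shape of clinical data that favours XML (and NoSQL document) storage can be sketched as follows; the record and the XPath-style query are illustrative only.

```python
# Sketch of the document-centric view of clinical data that makes XML (and
# XML-enabled/native XML databases) attractive; record and query are invented.
import xml.etree.ElementTree as ET

record = ET.fromstring("""
<patient id="P001">
  <encounter date="2012-03-04">
    <note>Complains of chest pain on exertion.</note>
    <lab name="troponin" value="0.02" unit="ng/mL"/>
  </encounter>
  <encounter date="2012-06-18">
    <lab name="troponin" value="0.15" unit="ng/mL"/>
  </encounter>
</patient>
""")

# Hierarchical query: all troponin results for this patient, with dates.
for enc in record.findall("encounter"):
    for lab in enc.findall("lab[@name='troponin']"):
        print(enc.get("date"), lab.get("value"), lab.get("unit"))
```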

  2. An XML Based Knowledge Management System for e-Collaboration and e-Learning

    Directory of Open Access Journals (Sweden)

    Varun Gopalakrishna

    2004-02-01

    Full Text Available This paper presents the development, key features, and implementation principles of a sustainable and scalable knowledge management system (KMS) prototype for creating, capturing, organizing, and managing digital information in the form of Extensible Markup Language (XML) documents and other popular file formats. It aims to provide a platform for global, instant, and secure access to and dissemination of information within a knowledge-intensive organization or a cluster of organizations through the Internet or an intranet. A three-tier system architecture was chosen for the KMS to provide performance and scalability while enabling future development that supports global, secure, real-time, and multi-media communication of information and knowledge among team members separated by great distances. An XML Content Server has been employed in this work to store, index, and retrieve large volumes of XML and binary content.

  3. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Science.gov (United States)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
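
    The NAMELIST-to-XML direction can be sketched in a few lines for the simplest case of scalar assignments; the real browser-based tool handles far more of the NAMELIST grammar, and the group and variable names below are invented.

```python
# Minimal sketch of the NAMELIST -> XML direction (simple scalar assignments
# only); group/variable names are invented and the real tool is far richer.
import re
import xml.etree.ElementTree as ET

namelist = """
&plasma
  ip    = 1.5e6
  ngrid = 128
  label = 'shot_42'
/
"""

m = re.search(r"&(\w+)(.*?)^/", namelist, re.S | re.M)
group = ET.Element("namelist", name=m.group(1))
for key, value in re.findall(r"(\w+)\s*=\s*([^\n]+)", m.group(2)):
    ET.SubElement(group, "var", name=key).text = value.strip().strip("'")

print(ET.tostring(group, encoding="unicode"))
```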

  4. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Directory of Open Access Journals (Sweden)

    O. Naito

    2017-01-01

    Full Text Available A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.

  5. Interpreting XML documents via an RDF schema

    NARCIS (Netherlands)

    Klein, Michel; Handschuh, Siegfried; Staab, Steffen

    2003-01-01

    One of the major problems in the realization of the vision of the "Semantic Web" is the transformation of existing web data into sources that can be processed and used by machines. This paper presents a procedure that can be used to turn XML documents into knowledge structures, by interpreting

  6. XML schemas for common bioinformatic data types and their application in workflow systems.

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.
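
    Schema-driven interoperation of the kind described can be sketched with lxml (assumed to be available); the schema and document file names are placeholders rather than the actual HOBIT artifacts.

```python
# Sketch of schema-driven interoperation: validate a sequence document against
# an XSD before handing it to the next tool in a workflow. lxml is assumed;
# the file names are placeholders, not the actual HOBIT schemas.
from lxml import etree

schema = etree.XMLSchema(etree.parse("sequenceML.xsd"))   # placeholder schema
doc = etree.parse("query_sequence.xml")                   # placeholder document

if schema.validate(doc):
    print("document conforms; safe to pass downstream")
else:
    for error in schema.error_log:
        print(error.line, error.message)
```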

  7. XML schemas for common bioinformatic data types and their application in workflow systems

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823

  8. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    Science.gov (United States)

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), there exist concrete tools and applications to work with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have been established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems, rising costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, only a few applications have been developed in a medical context. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet changes quickly from an information to a communication medium. The first XML-based applications in healthcare show the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been submitted to the W3C and partly realised in medical applications.

  9. Implementing XML Schema Naming and Design Rules

    Energy Technology Data Exchange (ETDEWEB)

    Lubell, Joshua [National Institute of Standards and Technology (NIST)]; Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Harvey, Betty [Electronic Commerce Connection, Inc.]

    2006-08-01

    We are building a methodology and tool kit for encoding XML schema Naming and Design Rules (NDRs) in a computer-interpretable fashion, enabling automated rule enforcement and improving schema quality. Through our experience implementing rules from various NDR specifications, we discuss some issues and offer practical guidance to organizations grappling with NDR development.

  10. 78 FR 28732 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Science.gov (United States)

    2013-05-16

    ...] Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema AGENCY: Federal... the SUPPLEMENTARY INFORMATION Section below for details. DATES: The XML is now available at the links mentioned below. FOR FURTHER INFORMATION CONTACT: Christina Switzer, Office of the General Counsel, Federal...

  11. The realization of the storage of XML and middleware-based data of electronic medical records

    International Nuclear Information System (INIS)

    Liu Shuzhen; Gu Peidi; Luo Yanlin

    2007-01-01

    In this paper, the technologies of XML and middleware are used to design and implement a unified electronic medical record storage and archive management system, and a common storage management model is given. XML is used to describe the structure of electronic medical records and to transform the medical data from traditional 'business-centered' medical information into unified 'patient-centered' XML documents, while middleware technology is used to shield the types of the databases at different departments of the hospital and to complete the integration of the medical data scattered in different databases, which is conducive to information sharing between different hospitals. (authors)

  12. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database, independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools will be used for 19 years, plus post mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for the exporting and importing of data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow for the altering of the import and export formats needed by the various elements, tracking the verification/validation of each database item, allow many organizations to provide database inputs, and the merging of the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS), which is often limiting

  13. Encoding of Fundamental Chemical Entities of Organic Reactivity Interest using chemical ontology and XML.

    Science.gov (United States)

    Durairaj, Vijayasarathi; Punnaivanam, Sankar

    2015-09-01

    Fundamental chemical entities are identified in the context of organic reactivity and classified as appropriate concept classes namely ElectronEntity, AtomEntity, AtomGroupEntity, FunctionalGroupEntity and MolecularEntity. The entity classes and their subclasses are organized into a chemical ontology named "ChemEnt" for the purpose of assertion, restriction and modification of properties through entity relations. Individual instances of entity classes are defined and encoded as a library of chemical entities in XML. The instances of entity classes are distinguished with a unique notation and identification values in order to map them with the ontology definitions. A model GUI named Entity Table is created to view graphical representations of all the entity instances. The detection of chemical entities in chemical structures is achieved through suitable algorithms. The possibility of asserting properties to the entities at different levels and the mechanism of property flow within the hierarchical entity levels is outlined. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. The appropriateness of XML for diagnostic description

    Energy Technology Data Exchange (ETDEWEB)

    Neto, A. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal)], E-mail: andre.neto@cfn.ist.utl.pt; Lister, J.B. [CRPP-EPFL, Association EURATOM-Confederation Suisse, 1015 Lausanne (Switzerland); Fernandes, H. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Yonekawa, I. [JAEA, Japan Atomic Energy Agency Naka (Japan); Varandas, C.A.F. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal)

    2007-10-15

    A standard for the self-description of fusion plasma diagnostics will be required in the near future. The motivation is to maintain and organize the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money. Since there is no existing standard to organize this kind of information, every EU Association stores and organizes each experiment in different ways. This can lead to severe problems when the particular organization schema is poorly documented. Standardization is the key to solve these problems. From the commercial information on the diagnostic (component supplier; component price) to the hardware description (component specifications; drawings) to the operation of the equipment (finite state machines) through change control (who changed what and when) and internationalization (information at least in English and a local language). This problem will be met on the ITER project, for which a solution is essential. A strong candidate solution is the Extensible Markup Language (XML). In this paper, a review of the current status of XML related technologies will be presented.

  15. The appropriateness of XML for diagnostic description

    International Nuclear Information System (INIS)

    Neto, A.; Lister, J.B.; Fernandes, H.; Yonekawa, I.; Varandas, C.A.F.

    2007-01-01

    A standard for the self-description of fusion plasma diagnostics will be required in the near future. The motivation is to maintain and organize the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money. Since there is no existing standard to organize this kind of information, every EU Association stores and organizes each experiment in different ways. This can lead to severe problems when the particular organization schema is poorly documented. Standardization is the key to solve these problems. From the commercial information on the diagnostic (component supplier; component price) to the hardware description (component specifications; drawings) to the operation of the equipment (finite state machines) through change control (who changed what and when) and internationalization (information at least in English and a local language). This problem will be met on the ITER project, for which a solution is essential. A strong candidate solution is the Extensible Markup Language (XML). In this paper, a review of the current status of XML related technologies will be presented

  16. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communication (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview including the specimen and substantial parts of several other major elements has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and together with the binary data to be stored in an EPUB container. This will facilitate: formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard, promote interoperability and should also result in the textual and numeric data being published using web technology without any change in composition.

  17. XML for Detector Description at GLAST

    Energy Technology Data Exchange (ETDEWEB)

    Bogart, Joanne

    2002-04-30

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector.

  18. XML for detector description at GLAST

    International Nuclear Information System (INIS)

    Bogart, J.; Favretto, D.; Giannitrapani, R.

    2001-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. The author describes its use for a GLAST detector

  19. XML for Detector Description at GLAST

    International Nuclear Information System (INIS)

    Bogart, Joanne

    2002-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector

  20. XML/TEI Stand-off Markup. One step beyond.

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena

    2018-01-01

    Stand-off markup is widely considered as a possible solution for overcoming the limitation of inline XML markup, primarily dealing with multiple overlapping hierarchies. Considering previous contributions on the subject and implementations of stand-off markup, we propose a new TEI-based model for

  1. A Database Approach to Content-based XML retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2003-01-01

    This paper describes a first prototype system for content-based retrieval from XML data. The system's design supports both XPath queries and complex information retrieval queries based on a language modelling approach to information retrieval. Evaluation using the INEX benchmark shows that it is

  2. Comparing Emerging XML Based Formats from a Multi-discipline Perspective

    Science.gov (United States)

    Sawyer, D. M.; Reich, L. I.; Nikhinson, S.

    2002-12-01

    This paper analyzes the similarities and differences among several examples of an emerging generation of Scientific Data Formats that are based on XML technologies. Some of the factors evaluated include the goals of these efforts, the data models and XML technologies used, and the maturity of currently available software. This paper then investigates the practicality of developing a single set of structural data objects and basic scientific concepts, such as units, that could be used across discipline boundaries and extended by disciplines and missions to create Scientific Data Formats for their communities. This analysis is partly based on an effort sponsored by the ESDIS office at GSFC to compare the Earth Science Markup Language (ESML) and the eXtensible Data Format (XDF), two members of this new generation of XML based Data Description Languages that have been developed by NASA funded efforts in recent years. This paper adds FITSML and potentially CDFML to the list of XML based Scientific Data Formats discussed. This paper draws heavily on a Formats Evolution Process Committee (http://ssdoo.gsfc.nasa.gov/nost/fep/) draft white paper primarily developed by Lou Reich, Mike Folk and Don Sawyer to assist the Space Science community in understanding Scientific Data Formats. One of the primary conclusions of that paper is that a scientific data format object model should be examined along two basic axes. The first is the complexity of the computer/mathematical data types supported and the second is the level of scientific domain specialization incorporated. This paper also discusses several of the issues that affect the decision on whether to implement a discipline or project specific Scientific Data Format as a formal extension of a general purpose Scientific Data Format or to implement the APIs independently.

  3. CWRML: representing crop wild relative conservation and use data in XML.

    Science.gov (United States)

    Moore, Jonathan D; Kell, Shelagh P; Iriondo, Jose M; Ford-Lloyd, Brian V; Maxted, Nigel

    2008-02-25

    Crop wild relatives are wild species that are closely related to crops. They are valuable as potential gene donors for crop improvement and may help to ensure food security for the future. However, they are becoming increasingly threatened in the wild and are inadequately conserved, both in situ and ex situ. Information about the conservation status and utilisation potential of crop wild relatives is diverse and dispersed, and no single agreed standard exists for representing such information; yet, this information is vital to ensure these species are effectively conserved and utilised. The European Community-funded project, European Crop Wild Relative Diversity Assessment and Conservation Forum, determined the minimum information requirements for the conservation and utilisation of crop wild relatives and created the Crop Wild Relative Information System, incorporating an eXtensible Markup Language (XML) schema to aid data sharing and exchange. Crop Wild Relative Markup Language (CWRML) was developed to represent the data necessary for crop wild relative conservation and ensure that they can be effectively utilised for crop improvement. The schema partitions data into taxon-, site-, and population-specific elements, to allow for integration with other more general conservation biology schemata which may emerge as accepted standards in the future. These elements are composed of sub-elements, which are structured in order to facilitate the use of the schema in a variety of crop wild relative conservation and use contexts. Pre-existing standards for data representation in conservation biology were reviewed and incorporated into the schema as restrictions on element data contents, where appropriate. CWRML provides a flexible data communication format for representing in situ and ex situ conservation status of individual taxa as well as their utilisation potential. The development of the schema highlights a number of instances where additional standards-development may

  4. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    To ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  5. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    Science.gov (United States)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  6. WaterML: an XML Language for Communicating Water Observations Data

    Science.gov (United States)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
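
    The element names below are illustrative stand-ins rather than the actual WaterML vocabulary defined by CUAHSI; the sketch only shows the general pattern of pulling a site, a variable, and its time-indexed values out of such a document:

```python
# Illustrative sketch only: parsing a simplified, WaterML-like time-series
# document. Element names are hypothetical, not the WaterML/WaterOneFlow
# vocabulary itself.
import xml.etree.ElementTree as ET

DOC = """
<timeSeriesResponse>
  <site code="08158000" name="Example River at Example City"/>
  <variable code="discharge" unit="cfs"/>
  <values>
    <value dateTime="2007-10-01T00:00:00">152.0</value>
    <value dateTime="2007-10-01T01:00:00">149.5</value>
  </values>
</timeSeriesResponse>
"""

root = ET.fromstring(DOC)
site = root.find("site").attrib
variable = root.find("variable").attrib
series = [(v.get("dateTime"), float(v.text)) for v in root.iter("value")]

print(site["name"], variable["code"], series)
```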

  7. XML Schema of PaGE-OM: page-om.xsd

    Lifescience Database Archive (English)

    Full Text Available one or more variation assays (e.g. assay multiplexing Assay_set). Note: These are optional laboratory specif...fication is used for data exchange formats (e.g. xml-schema). Therefore, it has optional direct associations

  8. XML for nuclear instrument control and monitoring: an approach towards standardisation

    International Nuclear Information System (INIS)

    Bharade, S.K.; Ananthakrishnan, T.S.; Kataria, S.K.; Singh, S.K.

    2004-01-01

    Communication among heterogeneous systems, with applications running under different operating systems and developed on different platforms, has undergone rapid change due to the adoption of XML standards. These are being developed for different industries, such as the chemical, medical and commercial sectors. The High Energy Physics community already has a standard for the exchange of data among different applications under heterogeneous distributed systems, such as the CMS Data Acquisition System. There are a large number of nuclear instruments supplied by different manufacturers which are increasingly getting connected. This approach is gaining wider acceptance in instruments at reactor sites, accelerator sites and complex nuclear experiments, especially at centres like CERN. In order for these instruments to be able to describe the data available from them in a platform-independent manner, an XML approach has been developed. This paper is the Electronics Division's first attempt at proposing an XML standard for the control, monitoring, data acquisition and analysis data generated by nuclear instruments at accelerator sites, nuclear reactor plants and laboratories. The gamut of nuclear instruments includes multichannel analysers, health physics instruments, accelerator control systems, reactor regulating systems, flux mapping systems, etc. (author)

  9. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  10. SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode

    International Nuclear Information System (INIS)

    Yan, Y; Mao, W; Jiang, S

    2014-01-01

    Purpose: To introduce a major upgrade of a novel XML beam composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, an XML beam definition file accommodating all parameters in the plan, including kV/MV imaging triggers, can be loaded locally in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported as a DICOM RT object as the start of the beam editing process, in which the trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes, including all MLC leaves, can be graphically rendered and edited as needed. The time for XML beam composing has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of the XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor would be a lengthy, error-prone process for sophisticated plans. An XML beam composing tool is highly desirable for R and D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potential.

  11. A Novel Approach for Configuring The Stimulator of A BCI Framework Using XML

    Directory of Open Access Journals (Sweden)

    Indar Sugiarto

    2009-08-01

    Full Text Available In a working BCI framework, all aspects must be considered as integral parts that contribute to the successful operation of a BCI system. This also includes the development of a robust but flexible stimulator, especially one that is closely related to the feedback of a BCI system. This paper describes a novel approach to providing a flexible visual stimulator using XML, which has been applied to a BCI (brain-computer interface) framework. Using the XML file format for configuring the visual stimulator of a BCI system, we can develop BCI applications which can accommodate many experiment strategies in BCI research. The BCI framework and its configuration platform are developed using the C++ programming language and incorporate Qt's most powerful XML parser, named QXmlStream. The implementation and experiments show that the XML configuration file can be well executed within the proposed BCI framework. Besides its capability of presenting flexible flickering frequencies and text formatting for SSVEP-based BCI, the configuration platform also provides 3 shapes, 16 colors, and 5 distinct feedback bars. It is not necessary to increase the number of shapes or colors, since those parameters are less important for the BCI stimulator. The proposed method can then be extended to enhance the usability of currently existing BCI frameworks such as BF++ Toys and BCI 2000.
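
    The authors' implementation is C++/Qt (QXmlStream); the following Python sketch is only an analogue of the idea, with invented element and attribute names for an SSVEP stimulator configuration:

```python
# Hypothetical sketch of reading an SSVEP stimulator configuration from XML.
# All element/attribute names are invented for the example; the paper's own
# configuration format and Qt-based parser are not reproduced here.
import xml.etree.ElementTree as ET

CONFIG = """
<stimulator>
  <target id="left"  shape="rectangle" color="red"   frequency="8.0"/>
  <target id="right" shape="circle"    color="green" frequency="13.0"/>
  <feedback bars="5"/>
</stimulator>
"""

root = ET.fromstring(CONFIG)
targets = [
    {"id": t.get("id"),
     "shape": t.get("shape"),
     "color": t.get("color"),
     "hz": float(t.get("frequency"))}
    for t in root.findall("target")
]
feedback_bars = int(root.find("feedback").get("bars"))

print(targets, feedback_bars)
```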

  12. A generalized wavelet extrema representation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement to previously developed algorithms in preventing artifacts in the reconstructed signal.
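
    For reference, a standard formulation of the wavelet extrema idea, in generic notation that is not taken from the paper itself:

```latex
% Standard wavelet-extrema formulation (generic notation). For a wavelet
% \psi and scale s > 0:
\[
  \psi_s(x) = \frac{1}{s}\,\psi\!\left(\frac{x}{s}\right),
  \qquad
  W_s f(x) = (f * \psi_s)(x).
\]
% If \psi = \theta' is the derivative of a smoothing kernel \theta, then
\[
  W_s f(x) = s\,\frac{d}{dx}\,\bigl(f * \theta_s\bigr)(x),
\]
% so the local extrema of x \mapsto |W_s f(x)| mark the sharpest variations
% (edges, peaks, valleys) of f at scale s, and the extrema representation
% stores only the pairs (x_i, W_s f(x_i)) at those locations across scales.
```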

  13. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  14. New Path Based Index Structure for Processing CAS Queries over XML Database

    Directory of Open Access Journals (Sweden)

    Krishna Asawa

    2017-01-01

    Full Text Available Querying nested data has become one of the most challenging issues for retrieving desired information from the Web. Today, diverse applications generate a tremendous amount of data in different formats. The data and information exchanged on the Web are commonly expressed in nested representations such as XML, JSON, etc. Unlike in a traditional database system, they don't have a rigid schema. In general, nested data is managed by storing the data and its structure separately, which significantly reduces the performance of data retrieval. Ensuring the efficiency of processing queries that locate the exact positions of elements has become a big challenge. Different indexing structures have been proposed in the literature to improve the performance of query processing on nested structures. Most past research on nested structures concentrates on the structure alone. This paper proposes a new index structure which combines siblings of the terminal nodes into one path, and which efficiently processes twig queries with a smaller number of lookups and joins. The proposed approach is compared with some of the existing approaches. The results show that queries are processed with better performance compared to the existing ones.

  15. Vague element selection and query rewriting for XML retrieval

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; de Jong, Franciska M.G.; Kraaij, W.

    In this paper we present the extension of our prototype three-level database system (TIJAH) developed for structured information retrieval. The extension is aimed at modeling vague search on XML elements. All three levels (conceptual, logical, and physical) of the TIJAH system are enhanced to

  16. IEEE 1451.1 Standard and XML Web Services: a Powerful Combination to Build Distributed Measurement and Control Systems

    OpenAIRE

    Viegas, Vítor; Pereira, José Dias; Girão, P. Silva

    2006-01-01

    In 2005, we presented the NCAP/XML, a prototype of an NCAP (Network Capable Application Processor) that runs under the .NET Framework and makes its functionality available through a set of Web Services using XML (eXtensible Markup Language). Giving continuity to this project, it is time to explain how to use the NCAP/XML to build a Distributed Measurement and Control System (DMCS) compliant with the 1451.1 Std. This paper is divided into two main parts: in the first part, we present the new software...

  17. An XML-based system for synthesis of data from disparate databases.

    Science.gov (United States)

    Kurc, Tahsin; Janies, Daniel A; Johnson, Andrew D; Langella, Stephen; Oster, Scott; Hastings, Shannon; Habib, Farhat; Camerlengo, Terry; Ervin, David; Catalyurek, Umit V; Saltz, Joel H

    2006-01-01

    Diverse data sets have become key building blocks of translational biomedical research. Data types captured and referenced by sophisticated research studies include high throughput genomic and proteomic data, laboratory data, data from imagery, and outcome data. In this paper, the authors present the application of an XML-based data management system to support integration of data from disparate data sources and large data sets. This system facilitates management of XML schemas and on-demand creation and management of XML databases that conform to these schemas. They illustrate the use of this system in an application for genotype-phenotype correlation analyses. This application implements a method of phenotype-genotype correlation based on phylogenetic optimization of large data sets of mouse SNPs and phenotypic data. The application workflow requires the management and integration of genomic information and phenotypic data from external data repositories and from the results of phenotype-genotype correlation analyses. Our implementation supports the process of carrying out a complex workflow that includes large-scale phylogenetic tree optimizations and application of Maddison's concentrated changes test to large phylogenetic tree data sets. The data management system also allows collaborators to share data in a uniform way and supports complex queries that target data sets.

  18. New XML-Based Files: Implications for Forensics

    Science.gov (United States)

    2009-04-01

    previously unknown social networks.4 We can use unique identifiers that survived copying and pasting to show plagiarism. Unique identifiers can also raise...the ODF and OOX specifications to standards bodies, surprisingly few technical articles have published details about the new XML document file...Sharp, George Dinolt, Beth Rosenberg, and the anonymous reviewers for their comments on previous versions of this article. This work was funded in

  19. About Hierarchical XML Structures, Replacement of Relational Data Structures in Construction and Implementation of ERP Systems

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The project's essential objective is to develop a new ERP system, of a homogeneous nature, based on XML structures, as a possible replacement for classic ERP systems. The criteria that guide the objective definition are modularity, portability and Web connectivity. This objective is connected to a series of secondary objectives, considering that the technological approach will be filtered through the economic, social and legislative environment for a validation-by-context study. Statistics and cybernetics are to be used for simulation purposes. The homogeneous approach is meant to provide strong modularity and portability, in relation to the n-tier principles, but the main advantage of the model is its openness to the semantic Web, based on a small-enterprise ontology defined with XML-driven languages. Shockwave solutions will be used for implementing client-oriented hypermedia elements, and an XML Gate will be defined between black-box modules, for a clear separation with obvious advantages. Security and the XMLTP project will be an important issue for XML transfers, due to the conflict between the open architecture of the Web, the readability of XML data and the privacy elements which have to be preserved within a business environment. The project's final aim is oriented towards small business, but the semantic Web perspective and the surprising new conflict between hierarchical/network data structures and relational ones will certainly widen its scope. The proposed model is meant to fulfill the IT compatibility requirements of the European environment, defined as a knowledge society. The paper is a brief account of the contributions of the team research in the type A project applied to CNCSIS, "Research on the Role of XML in Building Extensible and Homogeneous ERP Systems".

  20. Phase II-SOF Knowledge Coupler-Based Phase I XML Schema

    National Research Council Canada - National Science Library

    Whitlock, Warren L

    2005-01-01

    ... a list of diagnostic choices in an XML-tagged database. An analysis of the search function indicates that the native search capability of the SOFMH does not inherently contain the requirements to sustain a diagnostic tool...

  1. Defining the XML schema matching problem for a personal schema based query answering system

    OpenAIRE

    Smiljanic, M.; van Keulen, Maurice; Jonker, Willem

    2004-01-01

    In this report, we analyze the problem of personal schema matching. We define the ingredients of the XML schema matching problem using constraint logic programming. This allows us to thoroughly investigate specific matching problems. We do not have the ambition to provide a formalism that covers all kinds of schema matching problems. The target is specifically personal schema matching using XML. The report is organized as follows. Chapter 2 provides a detailed description of our research ...

  2. A polygon soup representation for free viewpoint video

    Science.gov (United States)

    Colleu, T.; Pateux, S.; Morin, L.; Labit, C.

    2010-02-01

    This paper presents a polygon soup representation for multiview data. Starting from a sequence of multi-view video plus depth (MVD) data, the proposed representation takes into account, in a unified manner, different issues such as compactness, compression, and intermediate view synthesis. The representation is built in two steps. First, a set of 3D quads is extracted using a quadtree decomposition of the depth maps. Second, a selective elimination of the quads is performed in order to reduce inter-view redundancies and thus provide a compact representation. Moreover, the proposed methodology for extracting the representation makes it possible to reduce ghosting artifacts. Finally, an adapted compression technique is proposed that limits coding artifacts. The results presented on two real sequences show that the proposed representation provides a good trade-off between rendering quality and data compactness.
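
    A much-simplified sketch of the first step (quadtree decomposition of a depth map into quads): the split criterion below, a plain depth-range tolerance, is a stand-in for the paper's actual tests, and NumPy is assumed.

```python
# Simplified sketch of a quadtree decomposition of a depth map into quads.
# The split criterion (depth range below a tolerance) is only a placeholder
# for the planarity/redundancy tests used in the paper.
import numpy as np

def quadtree_quads(depth, x=0, y=0, size=None, tol=0.05, min_size=4):
    """Return a list of (x, y, size, mean_depth) quads covering the map."""
    if size is None:
        size = depth.shape[0]              # assume a square, power-of-two map
    block = depth[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= tol:
        return [(x, y, size, float(block.mean()))]
    half = size // 2
    quads = []
    for dy in (0, half):
        for dx in (0, half):
            quads += quadtree_quads(depth, x + dx, y + dy, half, tol, min_size)
    return quads

# Toy 64x64 depth map: a near plane in one corner, far background elsewhere.
depth = np.ones((64, 64))
depth[:16, :16] = 0.3
print(len(quadtree_quads(depth)), "quads")
```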

  3. XML schemas and mark-up practices of taxonomic literature.

    Science.gov (United States)

    Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat

    2011-01-01

    We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.

  4. XML-Based Visual Specification of Multidisciplinary Applications

    Science.gov (United States)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  5. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application

    Science.gov (United States)

    Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-01-01

    Background: The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective: The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods: Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results: The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions: This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894

  6. The XML approach to implementing space link extension service management

    Science.gov (United States)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  7. A Survey and Analysis of Access Control Architectures for XML Data

    National Research Council Canada - National Science Library

    Estlund, Mark J

    2006-01-01

    .... Business uses XML to leverage the full potential of the Internet for e-Commerce. The government wants to leverage the ability to share information across many platforms between divergent agencies...

  8. ReDaX (Relational to XML data publishing): a lightweight framework for publishing relational information

    OpenAIRE

    Ormeño, Emilio G.; Berón, Fabián R.

    2003-01-01

    Perhaps one of the greatest drawbacks of XML is that it was not designed to store information; instead, it was designed to enable the publication and exchange of information through the XSL (eXtensible Stylesheet Language) specification. However, most of a company's information resides in relational databases. Publishing information via XML is the process of transforming relational information into an XML document in order to ...

  9. Automatically Generating a Distributed 3D Battlespace Using USMTF and XML-MTF Air Tasking Order, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  10. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    Science.gov (United States)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary rigorously in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered when minimizing the processing time during synchronization. The prediction of the parsing time for BOs is a significant property for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model will be validated and simplified according to the results of the experimental setup.
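
    A minimal sketch of this kind of measurement, using two Python standard-library parsers on synthetic documents rather than the paper's business-object corpus and its four parsers:

```python
# Minimal sketch: measuring how document structure (here, just element count)
# affects parsing time for two standard-library parsers.
import timeit
import xml.etree.ElementTree as ET
import xml.dom.minidom as minidom

def synthetic_bo(n_items):
    items = "".join(f"<item id='{i}'><qty>{i}</qty></item>" for i in range(n_items))
    return f"<businessObject><header/><items>{items}</items></businessObject>"

for n in (100, 1000, 10000):
    doc = synthetic_bo(n)
    t_etree = timeit.timeit(lambda: ET.fromstring(doc), number=20)
    t_dom = timeit.timeit(lambda: minidom.parseString(doc), number=20)
    print(f"{n:6d} elements  ElementTree {t_etree:.3f}s   minidom {t_dom:.3f}s")
```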

  11. Web geoprocessing services on GML with a fast XML database

    African Journals Online (AJOL)


    tasks on those data and return response messages and/or data outputs. To achieve an efficient ..... Even though GML data is based on XML data model and can be ..... language, and a stylesheet language CSS (Cascading Style Sheets). It is.

  12. Von der XML-Datenbasis zur nutzergerecht strukturierten Web-Site

    NARCIS (Netherlands)

    Freitag, D.; Wombacher, Andreas

    2002-01-01

    Due to the increasing use of information on the WWW by different sorts of device types, content providers have to solve the problem of how to present fitting content both effectively and in a way that takes into account the needs of the device type. The XML language family offers the possibility to present

  13. Improving the Virtual Learning Development Processes Using XML Standards.

    Science.gov (United States)

    Suss, Kurt; Oberhofer, Thomas

    2002-01-01

    Suggests that distributed learning environments and content often lack a common basis for the exchange of learning materials, which can hinder or even delay innovation and delivery of learning technology. Standards for platforms and authoring may provide a way to improve interoperability and cooperative development. Provides an XML-based approach…

  14. Personalization of XML Content Browsing Based on User Preferences

    Science.gov (United States)

    Encelle, Benoit; Baptiste-Jessel, Nadine; Sedes, Florence

    2009-01-01

    Personalization of user interfaces for browsing content is a key concept to ensure content accessibility. In this direction, we introduce concepts that result in the generation of personalized multimodal user interfaces for browsing XML content. User requirements concerning the browsing of a specific content type can be specified by means of…

  15. Castles Made of Sand: Building Sustainable Digitized Collections Using XML.

    Science.gov (United States)

    Ragon, Bart

    2003-01-01

    Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)

  16. The abstract representations in speech processing.

    Science.gov (United States)

    Cutler, Anne

    2008-11-01

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.

  17. XML schema for atomic and molecular data. Summary report of consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.

    2008-04-01

    Advanced developments in computer technologies offer exciting opportunities for new distribution tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the A+M Data Unit initiated within the collaborative efforts of the DCN (Data Centre Network) a new standard for atomic, molecular and particle-surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon meets biannually to discuss progress made on the XML schema, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. (author)

  18. Graph-representation of oxidative folding pathways

    Directory of Open Access Journals (Sweden)

    Kaján László

    2005-01-01

    Full Text Available Abstract Background: The process of oxidative folding combines the formation of native disulfide bonds with conformational folding, resulting in the native three-dimensional fold. Oxidative folding pathways can be described in terms of disulfide intermediate species (DIS), which can also be isolated and characterized. Each DIS corresponds to a family of folding states (conformations) that the given DIS can adopt in three dimensions. Results: The oxidative folding space can be represented as a network of DIS states interconnected by disulfide interchange reactions that can either create/abolish or rearrange disulfide bridges. We propose a simple 3D representation wherein the states having the same number of disulfide bridges are placed on separate planes. In this representation, the shuffling transitions are within the planes, and the redox edges connect adjacent planes. In a number of experimentally studied cases (bovine pancreatic trypsin inhibitor, insulin-like growth factor and epidermal growth factor), the observed intermediates appear as part of contiguous oxidative folding pathways. Conclusions: Such networks can be used to visualize folding pathways in terms of the experimentally observed intermediates. A simple visualization template written for the Tulip package http://www.tulip-software.org/ can be obtained from V.A.

  19. Automatically Generating a Distributed 3D Virtual Battlespace Using USMTF and XML-MTF Air Tasking Orders, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  20. An XML description of detector geometries for GEANT4

    International Nuclear Information System (INIS)

    Figgins, J.; Walker, B.; Comfort, J.R.

    2006-01-01

    A code has been developed that enables the geometry of detectors to be specified easily and flexibly in the XML language, for use in the Monte Carlo program GEANT4. The user can provide clear documentation of the geometry without being proficient in the C++ language of GEANT4. The features and some applications are discussed.

  1. Fuzzy Approaches to Flexible Querying in XML Retrieval

    Directory of Open Access Journals (Sweden)

    Stefania Marrara

    2016-04-01

    Full Text Available In this paper we review some approaches to flexible querying in XML that apply several techniques, among which is Fuzzy Set Theory. In particular, we focus on FleXy, a flexible extension of XQuery-FT that was developed as a library on the open source engine BaseX. We then present PatentLight, a tool for patent retrieval that was developed to show the expressive power of FleXy.

  2. Visual dictionaries as intermediate features in the human brain

    Directory of Open Access Journals (Sweden)

    Kandan eRamakrishnan

    2015-01-01

    Full Text Available The human visual system is assumed to transform low-level visual features to object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in the computation of visual dictionaries and in their pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2 and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity compared to HMAX. Furthermore, visual dictionary representations by HMAX and BoW explain a significant amount of brain activity in higher areas which are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to more faithfully represent neural responses in low- and intermediate-level visual areas of the brain.

  3. The version control service for ATLAS data acquisition configuration files (DAQ; configuration; OKS; XML)

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data-acquisition-related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or by files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification causing problems or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...
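
    The general pattern described here (validate the modified files, then commit them through version control) can be sketched as follows. This is not the ATLAS service itself, which also performs OKS-specific consistency checks against the overall configuration; git is used only as a stand-in version control system and the well-formedness check stands in for the real validation step.

```python
# Hedged sketch of a "validate, then commit" workflow for XML configuration
# files. Only XML well-formedness is checked here; the real service applies
# much stronger, OKS-specific consistency checks.
import subprocess
import sys
import xml.etree.ElementTree as ET

def validate_and_commit(paths, message):
    for path in paths:
        try:
            ET.parse(path)                      # rejects syntactically broken XML
        except ET.ParseError as err:
            sys.exit(f"refusing to commit: {path}: {err}")
    subprocess.run(["git", "add", *paths], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)

# Example (hypothetical file name):
# validate_and_commit(["partition.data.xml"], "update trigger configuration")
```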

  4. A Short Story about XML Schemas, Digital Preservation and Format Libraries

    Directory of Open Access Journals (Sweden)

    Steve Knight

    2012-03-01

    Full Text Available One morning we came in to work to find that one of our servers had made 1.5 million attempts to contact an external server in the preceding hour. It turned out that the calls were being generated by the Library’s digital preservation system (Rosetta while attempting to validate XML Schema Definition (XSD declarations included in the XML files of the Library’s online newspaper application Papers Past, which we were in the process of loading into Rosetta. This paper describes our response to this situation and outlines some of the issues that needed to be canvassed before we were able to arrive at a suitable solution, including the digital preservation status of these XSDs; their impact on validation tools, such as JHOVE; and where these objects should reside if they are considered material to the digital preservation process.

  5. CREATING OPEN DIGITAL LIBRARY USING XML: IMPLEMENTATION OF OAI-PMH

    OpenAIRE

    M. Vesely; T. Baron; J.Y. Le Meur; T. Simko

    2002-01-01

    This article describes the implementation of the OAI-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and a service provider, and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.
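
    A minimal harvesting sketch against an OAI-PMH endpoint; the base URL below is a placeholder, not the actual CDS address:

```python
# Minimal OAI-PMH harvesting sketch (consuming the data-provider side of the
# protocol). The endpoint URL is a placeholder.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

base_url = "https://example.org/oai2d"          # hypothetical endpoint
query = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})

with urlopen(f"{base_url}?{query}") as response:
    root = ET.fromstring(response.read())

for record in root.iter(f"{OAI}record"):
    identifier = record.findtext(f"{OAI}header/{OAI}identifier")
    title = record.findtext(f".//{DC}title")
    print(identifier, title)
```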

  6. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.

  7. An effective XML based name mapping mechanism within StoRM

    International Nuclear Information System (INIS)

    Corso, E; Forti, A; Ghiselli, A; Magnoni, L; Zappi, R

    2008-01-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data, evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organizations that want to use the storage areas. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
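
    A hypothetical sketch of the idea of resolving a logical name to a physical path purely from an XML mapping document; the element and attribute names are invented and do not reproduce StoRM's actual namespace configuration format:

```python
# Hypothetical sketch: resolving a logical file name to a physical path from
# an XML mapping document, with no database lookup. Element/attribute names
# are invented for illustration only.
import xml.etree.ElementTree as ET

MAPPING = """
<namespace>
  <storage-area name="atlas-disk" vo="atlas" quality="replica"
                root="/storage/gpfs/atlas" logical-prefix="/atlas/disk"/>
  <storage-area name="atlas-tape" vo="atlas" quality="custodial"
                root="/storage/tape/atlas" logical-prefix="/atlas/tape"/>
</namespace>
"""

def resolve(logical_name, vo):
    root = ET.fromstring(MAPPING)
    for area in root.findall("storage-area"):
        prefix = area.get("logical-prefix")
        if area.get("vo") == vo and logical_name.startswith(prefix):
            return area.get("root") + logical_name[len(prefix):]
    raise LookupError(logical_name)

print(resolve("/atlas/disk/data17/file.root", vo="atlas"))
```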

  8. Evaluation of efficient XML interchange (EXI) for large datasets and as an alternative to binary JSON encodings

    OpenAIRE

    Hill, Bruce W.

    2015-01-01

    Approved for public release; distribution is unlimited. Current and emerging Navy information concepts, including network-centric warfare and Navy Tactical Cloud, presume high network throughput and interoperability. The Extensible Markup Language (XML) addresses the latter requirement, but its verbosity is problematic for afloat networks. JavaScript Object Notation (JSON) is an alternative to XML common in web applications and some non-relational databases. Compact, binary encodings exist ...

  9. Towards privacy-preserving XML transformation

    DEFF Research Database (Denmark)

    Jensen, Meiko; Kerschbaum, Florian

    2011-01-01

    In composite web services one can only either hide the identities of the participants or provide end-to-end confidentiality via encryption. For a designer of inter-organizational business processes this implies that she either needs to reveal her suppliers or force her customers to reveal...... their information. In this paper we present a solution to the encrypted data modification problem and reconcile this apparent conflict. Using a generic sender-transformer-recipient example scenario, we illustrate the steps required for applying XML transformations to encrypted data, present the cryptographic...... building blocks, and give an outlook on advantages and weaknesses of the proposed encryption scheme. The transformer is then able to offer composite services without itself learning the content of the messages....

  10. IMPROVING THE VIRTUAL LEARNING DEVELOPMENT PROCESSES USING XML STANDARDS

    Directory of Open Access Journals (Sweden)

    Kurt Suss

    2002-06-01

    Full Text Available Distributed learning environments and content often lack a common basis for the exchange of learning materials. This delays, or even hinders, both innovation and delivery of learning technology. Standards for platforms and authoring may provide a way to improve interoperability and cooperative development. This article provides an XML-based approach to this problem created by the IMS Global Learning Consortium.

  11. XTCE and XML Database Evolution and Lessons from JWST, LandSat, and Constellation

    Science.gov (United States)

    Gal-Edd, Jonathan; Kreistle, Steven; Fatig, Curtis; Jones, Ronald

    2008-01-01

    The database organizations within three different NASA projects have advanced current practices by creating database synergy between the various spacecraft life cycle stakeholders and educating users in the benefits of the Consultative Committee for Space Data Systems (CCSDS) XML Telemetry and Command Exchange (XTCE) format. The combination of XML for managing program data and CCSDS XTCE for exchange is a robust approach that will meet all user requirements using standards and non-proprietary tools. COTS tools for XTCE/XML are wide and varied. Combining various low-cost and free tools can be more expensive in the long run than choosing a more expensive COTS tool that meets all the needs. This was especially important when deploying to 32 remote sites with no need for licenses. A common mission XTCE/XML format between dissimilar systems is possible and is not difficult. Command XML/XTCE is more complex than telemetry, and the use of XTCE/XML metadata to describe pages and scripts is needed due to the proprietary nature of most current ground systems. Other mission and science products such as spacecraft loads, science image catalogs, and mission operation procedures can all be described with XML as well, to increase their flexibility as systems evolve and change. Figure 10 is an example of a spacecraft table load. The word is out and the XTCE community is growing. The first XTCE user group was held in October and, in addition to ESA/ESOC, SC02000, and CNES, identified several systems based on XTCE. The second XTCE user group is scheduled for March 10, 2008, with LDMC and others joining. As the experience with XTCE grows and the user community receives the promised benefits of using XTCE and XML, the interest is growing fast.

  12. Rosetta Ligand docking with flexible XML protocols.

    Science.gov (United States)

    Lemmon, Gordon; Meiler, Jens

    2012-01-01

    RosettaLigand is premier software for predicting how a protein and a small molecule interact. Benchmark studies demonstrate that 70% of the top-scoring RosettaLigand predicted interfaces are within 2Å RMSD of the crystal structure [1]. The latest release of the RosettaLigand software includes many new features, such as (1) docking of multiple ligands simultaneously, (2) representing ligands as fragments for greater flexibility, (3) redesign of the interface during docking, and (4) an XML script based interface that gives the user full control of the ligand docking protocol.

  13. Using XML Configuration-Driven Development to Create a Customizable Ground Data System

    Science.gov (United States)

    Nash, Brent; DeMore, Martha

    2009-01-01

    The Mission data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse and fine grained control over all aspects of GDS operation.

  14. An XML-based interchange format for genotype-phenotype data.

    Science.gov (United States)

    Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B

    2008-02-01

    Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation is available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.

  15. Integration of HTML documents into an XML-based knowledge repository.

    Science.gov (United States)

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler / viewer / editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded into a database for use by EPIG.

  16. A Conversion Tool for Mathematical Expressions in Web XML Files.

    Science.gov (United States)

    Ohtake, Nobuyuki; Kanahori, Toshihiro

    2003-01-01

    This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…

  17. Automating data acquisition into ontologies from pharmacogenetics relational data sources using declarative object definitions and XML.

    Science.gov (United States)

    Rubin, Daniel L; Hewett, Micheal; Oliver, Diane E; Klein, Teri E; Altman, Russ B

    2002-01-01

    Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.
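    The import path described above, relational rows serialized as XML and then schema-checked before entering the ontology, can be sketched as follows; the table and element names are invented for illustration.

```python
# Sketch of exporting relational sequence data as XML for import into an
# ontology-backed system. Table and element names are invented for illustration.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sequence (gene TEXT, accession TEXT, length INTEGER)")
conn.executemany(
    "INSERT INTO sequence VALUES (?, ?, ?)",
    [("CYP2D6", "M33388", 4383), ("TPMT", "AB045146", 25000)],
)

root = ET.Element("sequences")
for gene, accession, length in conn.execute("SELECT gene, accession, length FROM sequence"):
    ET.SubElement(root, "sequence", gene=gene, accession=accession, length=str(length))

# The resulting document would then be validated against an XML Schema
# before being loaded into the knowledge base.
print(ET.tostring(root, encoding="unicode"))
```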

  18. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application.

    Science.gov (United States)

    Peissig, Peggy; Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-09-13

    The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. ©Peggy Peissig, Kelsey M Schwei, Christopher Kadolph, Joseph Finamore, Efrain Cancel, Catherine A McCarty, Asha Okorie, Kate L Thomas, Jennifer Allen Pacheco, Jyotishman Pathak, Stephen B Ellis, Joshua C Denny, Luke V Rasmussen, Gerard Tromp, Marc S Williams, Tamara R Vrabec, Murray H Brilliant. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.09.2017.
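    The idea of keeping form specifications in data, so the form can change without changing code, can be sketched as follows; the fields and attributes below are invented and do not reflect the application's actual XML schema.

```python
# Sketch of context-driven form generation from an XML specification: the form
# layout lives in data, so it can change without recompiling code. Element
# names are invented and do not reflect the actual application's schema.
import xml.etree.ElementTree as ET

FORM_SPEC = """
<form name="eye-exam">
  <field id="visual_acuity_od" label="Visual acuity (OD)" type="text"/>
  <field id="iop_od" label="Intraocular pressure (OD, mmHg)" type="number"/>
  <field id="cataract" label="Cataract present" type="checkbox"
         depends-on="lens_status"/>
</form>
"""

def render_fields(spec: str):
    """Turn the XML form specification into a list of field descriptors."""
    for field in ET.fromstring(spec).findall("field"):
        yield {
            "id": field.get("id"),
            "label": field.get("label"),
            "type": field.get("type"),
            "depends_on": field.get("depends-on"),  # drives context-based display
        }

for descriptor in render_fields(FORM_SPEC):
    print(descriptor)
```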

  19. Pictorial Real, Historical Intermedial. Digital Aesthetics and the Representation of History in Eric Rohmer’s The Lady and the Duke

    Directory of Open Access Journals (Sweden)

    Tagliani Giacomo

    2016-09-01

    Full Text Available In The Lady and the Duke (2001, Eric Rohmer provides an unusual and “conservative” account of the French Revolution by recurring to classical and yet “revolutionary” means. The interpolation between painting and film produces a visual surface which pursues a paradoxical effect of immediacy and verisimilitude. At the same time though, it underscores the represented nature of the images in a complex dynamic of “reality effect” and critical meta-discourse. The aim of this paper is the analysis of the main discursive strategies deployed by the film to disclose an intermedial effectiveness in the light of its original digital aesthetics. Furthermore, it focuses on the problematic relationship between image and reality, deliberately addressed by Rohmer through the dichotomy simulation/illusion. Finally, drawing on the works of Louis Marin, it deals with the representation of history and the related ideology, in order to point out the film’s paradoxical nature, caught in an undecidability between past and present.

  20. Modeling views in the layered view model for XML using UML

    NARCIS (Netherlands)

    Rajugan, R.; Dillon, T.S.; Chang, E.; Feng, L.

    In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Conversely, since the introduction of Extensible Markup Language (XML), it is fast emerging as the dominant standard for

  1. Intermedial Strategies of Memory in Contemporary Novels

    DEFF Research Database (Denmark)

    Tanderup, Sara

    2014-01-01

    , and Judd Morrissey and drawing on the theoretical perspectives of N. Katherine Hayles (media studies) and Andreas Huyssen (cultural memory studies), Tanderup argues that recent intermedial novels reflect a certain nostalgia celebrating and remembering the book as a visual and material object in the age...... of digital media while also highlighting the influence of new media on our cultural understanding and representation of memory and the past....

  2. V4 activity predicts the strength of visual short-term memory representations.

    Science.gov (United States)

    Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F

    2009-06-10

    Recent studies have shown the existence of a form of visual memory that lies intermediate of iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high capacity yet weak VSTM store is represented in visual parts of the brain. Allegedly, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.

  3. A Self-adaptive Scope Allocation Scheme for Labeling Dynamic XML Documents

    NARCIS (Netherlands)

    Shen, Y.; Feng, L.; Shen, T.; Wang, B.

    This paper proposes a self-adaptive scope allocation scheme for labeling dynamic XML documents. It is general, light-weight and can be built upon existing data retrieval mechanisms. Bayesian inference is used to compute the actual scope allocated for labeling a certain node based on both the prior

  4. Motor memory is encoded as a gain-field combination of intrinsic and extrinsic action representations.

    Science.gov (United States)

    Brayanov, Jordan B; Press, Daniel Z; Smith, Maurice A

    2012-10-24

    Actions can be planned in either an intrinsic (body-based) reference frame or an extrinsic (world-based) frame, and understanding how the internal representations associated with these frames contribute to the learning of motor actions is a key issue in motor control. We studied the internal representation of this learning in human subjects by analyzing generalization patterns across an array of different movement directions and workspaces after training a visuomotor rotation in a single movement direction in one workspace. This provided a dense sampling of the generalization function across intrinsic and extrinsic reference frames, which allowed us to dissociate intrinsic and extrinsic representations and determine the manner in which they contributed to the motor memory for a trained action. A first experiment showed that the generalization pattern reflected a memory that was intermediate between intrinsic and extrinsic representations. A second experiment showed that this intermediate representation could not arise from separate intrinsic and extrinsic learning. Instead, we find that the representation of learning is based on a gain-field combination of local representations in intrinsic and extrinsic coordinates. This gain-field representation generalizes between actions by effectively computing similarity based on the (Mahalanobis) distance across intrinsic and extrinsic coordinates and is in line with neural recordings showing mixed intrinsic-extrinsic representations in motor and parietal cortices.

  5. XPIWIT--an XML pipeline wrapper for the Insight Toolkit.

    Science.gov (United States)

    Bartschat, Andreas; Hübner, Eduard; Reischl, Markus; Mikut, Ralf; Stegmaier, Johannes

    2016-01-15

    The Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos. XPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/. johannes.stegmaier@kit.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
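    The general pattern of an XML-described pipeline can be illustrated with a deliberately simplified sketch; the pipeline format and the filters below are invented and are far simpler than XPIWIT's actual ITK-based pipelines.

```python
# Sketch of interpreting an XML-described processing pipeline; the format and
# the filters are invented and much simpler than XPIWIT's.
import xml.etree.ElementTree as ET

PIPELINE = """
<pipeline>
  <filter name="threshold" level="10"/>
  <filter name="scale" factor="2"/>
</pipeline>
"""

def threshold(values, level):
    return [v for v in values if v >= float(level)]

def scale(values, factor):
    return [v * float(factor) for v in values]

FILTERS = {"threshold": threshold, "scale": scale}

def run_pipeline(xml_text, data):
    # Each <filter> element names a function and supplies its parameters.
    for node in ET.fromstring(xml_text).findall("filter"):
        params = {k: v for k, v in node.attrib.items() if k != "name"}
        data = FILTERS[node.get("name")](data, **params)
    return data

print(run_pipeline(PIPELINE, [3.0, 12.0, 25.0]))  # -> [24.0, 50.0]
```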

  6. Research on Heterogeneous Data Exchange based on XML

    Science.gov (United States)

    Li, Huanqin; Liu, Jinfeng

    Integration of multiple data sources is becoming increasingly important for enterprises that cooperate closely with their partners for e-commerce. OLAP enables analysts and decision makers fast access to various materialized views from data warehouses. However, many corporations have internal business applications deployed on different platforms. This paper introduces a model for heterogeneous data exchange based on XML. The system can exchange and share the data among the different sources. The method used to realize the heterogeneous data exchange is given in this paper.

  7. An XML-based configuration system for MAST PCS

    International Nuclear Information System (INIS)

    Storrs, J.; McArdle, G.

    2008-01-01

    MAST PCS, a port of General Atomics' generic Plasma Control System, is a large software system comprising many source files in C and IDL. Application parameters can affect multiple source files in complex ways, making code development and maintenance difficult. The MAST PCS configuration system aims to make the task of the application developer easier, through the use of XML-based configuration files and a configuration tool which processes them. It is presented here as an example of a useful technique with wide application

  8. Summary report of consultants' meeting on XML schema for atomic and molecular data

    International Nuclear Information System (INIS)

    Humbert, D.

    2007-07-01

    Advanced developments in computer technologies offer exciting opportunities for new distributed tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the AMD Unit initiated within the collaborative efforts of the DCN (Data Centre Network) a new standard for atomic, molecular and particle surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon, meets biannually to discuss progress made on the XML schema and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. This meeting is the first such gathering of these specialists in 2007. (author)

  9. XML-based assembly visualization for a multi-CAD digital mock-up system

    International Nuclear Information System (INIS)

    Song, In Ho; Chung, Sung Chong

    2007-01-01

    Using a virtual assembly tool, engineers are able to design accurate and interference-free parts without making physical mock-ups. Instead of a single CAD source, several CAD systems are used to design a complex product in a distributed design environment. In this paper, a multi-CAD assembly method is proposed based on XML and a lightweight CAD file. The XML data contains the hierarchy of the multi-CAD assembly. The lightweight CAD file, produced from various CAD files through the ACIS kernel and InterOp, includes not only mesh and B-Rep data, but also topological data. It is used to visualize CAD data and to verify dimensions of the parts. The developed system is executed on desktop computers. It does not require commercial CAD systems to visualize 3D assembly data. Multi-CAD models have been assembled to verify the effectiveness of the developed DMU system on the Internet

  10. Content Management von Leittexten mit XML Topic Maps

    Directory of Open Access Journals (Sweden)

    Johannes Busse

    2003-07-01

    Full Text Available The authors define competence in the use of internet-based information and communication technologies as a key qualification for students of all disciplines. In this paper they describe a project that the Department of Education at the University of Heidelberg has been running since 2001. Students from the humanities and social sciences are trained as "learning advisers" who, acting as multipliers, acquire the necessary skills. Following the "Leittext" (guided-text) method, the participants develop XML-based content in a self-directed way. This presupposes the acquisition of information-technology skills, which, together with the building of a network (both technical and social), forms a focal point of the training.

  11. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    Science.gov (United States)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.

  12. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    Science.gov (United States)

    2003-03-01

    The indexed text for this report is fragmentary: it preserves part of a comparison table of SQL-to-XML conversion tools (PXSLServlet, Paul Tchistopolskii, open source, relational; sql2dtd, David Mertz, public domain, relational; sql2xml, Scott Hathaway, public...), a reference entry ([Hunter 2001] Hunter, David; Cagle, Kurt; Dix, Chris; Kovack, Roger; Pinnock, Jonathan; Rafter, Jeff; Beginning XML, 2nd Edition), and part of the distribution list (Naval Postgraduate School, Monterey, California; Curt Blais; Erik Chaum, NAVSEA Undersea).

  13. Clinical map document based on XML (cMDX): document architecture with mapping feature for reporting and analysing prostate cancer in radical prostatectomy specimens.

    Science.gov (United States)

    Eminaga, Okyaz; Hinkelammert, Reemt; Semjonow, Axel; Neumann, Joerg; Abbas, Mahmoud; Koepke, Thomas; Bettendorf, Olaf; Eltze, Elke; Dugas, Martin

    2010-11-15

    The pathology report of radical prostatectomy specimens plays an important role in clinical decisions and the prognostic evaluation in Prostate Cancer (PCa). The anatomical schema is a helpful tool to document PCa extension for clinical and research purposes. To achieve electronic documentation and analysis, an appropriate documentation model for anatomical schemas is needed. For this purpose we developed cMDX. The document architecture of cMDX was designed according to Open Packaging Conventions by separating the whole data into template data and patient data. Analogue custom XML elements were considered to harmonize the graphical representation (e.g. tumour extension) with the textual data (e.g. histological patterns). The graphical documentation was based on the four-layer visualization model that forms the interaction between different custom XML elements. Sensitive personal data were encrypted with a 256-bit cryptographic algorithm to avoid misuse. In order to assess the clinical value, we retrospectively analysed the tumour extension in 255 patients after radical prostatectomy. The pathology report with cMDX can represent pathological findings of the prostate in schematic styles. Such reports can be integrated into the hospital information system. "cMDX" documents can be converted into different data formats like text, graphics and PDF. Supplementary tools like cMDX Editor and an analyser tool were implemented. The graphical analysis of 255 prostatectomy specimens showed that PCa were mostly localized in the peripheral zone (Mean: 73% ± 25). 54% of PCa showed a multifocal growth pattern. cMDX can be used for routine histopathological reporting of radical prostatectomy specimens and provide data for scientific analysis.

  14. XML technologies for the Omaha System: a data model, a Java tool and several case studies supporting home healthcare.

    Science.gov (United States)

    Vittorini, Pierpaolo; Tarquinio, Antonietta; di Orio, Ferdinando

    2009-03-01

    The eXtensible markup language (XML) is a metalanguage which is useful to represent and exchange data between heterogeneous systems. XML may enable healthcare practitioners to document, monitor, evaluate, and archive medical information and services into distributed computer environments. Therefore, the most recent proposals on electronic health records (EHRs) are usually based on XML documents. Since none of the existing nomenclatures were specifically developed for use in automated clinical information systems, but were adapted to such use, numerous current EHRs are organized as a sequence of events, each represented through codes taken from international classification systems. In nursing, a hierarchically organized problem-solving approach is followed, which hardly couples with the sequential organization of such EHRs. Therefore, the paper presents an XML data model for the Omaha System taxonomy, which is one of the most important international nomenclatures used in the home healthcare nursing context. Such a data model represents the formal definition of EHRs specifically developed for nursing practice. Furthermore, the paper delineates a Java application prototype which is able to manage such documents, shows the possibility to transform such documents into readable web pages, and reports several case studies, one currently managed by the home care service of a Health Center in Central Italy.
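    The transformation of such XML documents into readable web pages is typically done with XSLT. A hedged sketch using the third-party lxml library; the record structure below is invented and does not follow the actual Omaha System data model.

```python
# Sketch of turning an XML care record into a readable web page with XSLT.
# Element names are invented and do not follow the Omaha System data model.
from lxml import etree

RECORD = """
<visit date="2009-01-15">
  <problem name="Pain"/>
  <intervention category="Teaching">Relaxation techniques</intervention>
</visit>
"""

STYLESHEET = """
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/visit">
    <html><body>
      <h1>Visit on <xsl:value-of select="@date"/></h1>
      <p>Problem: <xsl:value-of select="problem/@name"/></p>
      <p>Intervention (<xsl:value-of select="intervention/@category"/>):
         <xsl:value-of select="intervention"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
"""

transform = etree.XSLT(etree.fromstring(STYLESHEET))
print(str(transform(etree.fromstring(RECORD))))
```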

  15. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Metadata Encoding and Transmission Standard (METS). To find datasets in future portals and to make use of this data in own scientific work, proper selection of discovery metadata and application metadata is very important. Some XML-metadata profiles are not suitable for preservation, because version changes are very fast and make it nearly impossible to automate the migration. For other XML-metadata profiles schema definitions are changed after publication of the profile or the schema definitions become inaccessible, which might cause problems during validation of the metadata inside the preservation system [2]. Some metadata profiles are not used widely enough and might not even exist in the future. Eventually, discovery and application metadata have to be embedded into the mdWrap-subtree of the METS-XML. [1] http://www.archivematica.org [2] http://dx.doi.org/10.2218/ijdc.v7i1.215

  16. XTCE: XML Telemetry and Command Exchange Tutorial, XTCE Version 1

    Science.gov (United States)

    Rice, Kevin; Kizzort, Brad

    2008-01-01

    These presentation slides are a tutorial on XML Telemetry and Command Exchange (XTCE). The goal of XTCE is to provide an industry-standard mechanism for describing telemetry and command streams (particularly from satellites). It will lower cost and increase validation over traditional formats, and support exchange or native format use. XTCE is designed to describe bit streams that are typical of telemetry and command in the historic space domain.

  17. Upgrading a TCABR data analysis and acquisition system for remote participation using Java, XML, RCP and modern client/server communication/authentication

    International Nuclear Information System (INIS)

    Sa, W.P. de

    2010-01-01

    The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using the eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure call (RPC) based on the XML (RPC-XML technology). The integration among Java language, XML and RPC-XML technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application allows simple graphical user interface (GUI) access. The TCABR tokamak team in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa) is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment, organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks'.
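    The client/server data-access layer described above is built on XML-RPC. A minimal sketch with Python's standard xmlrpc modules (the method name and payload are invented; this is not the TCABR software):

```python
# Minimal XML-RPC sketch illustrating an XML-based client/server data-access
# layer; the method name and payload are invented, not the TCABR interface.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def get_signal(shot: int, channel: str):
    # Stand-in for a database lookup of acquired data.
    return {"shot": shot, "channel": channel, "samples": [0.0, 0.1, 0.4]}

server = SimpleXMLRPCServer(("localhost", 8001), logRequests=False, allow_none=True)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8001", allow_none=True)
print(client.get_signal(25431, "H-alpha"))

server.shutdown()
```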

  18. Upgrading a TCABR data analysis and acquisition system for remote participation using Java, XML, RCP and modern client/server communication/authentication

    Energy Technology Data Exchange (ETDEWEB)

    Sa, W.P. de, E-mail: pires@if.usp.b [Instituto de Fisica, Universidade de Sao Paulo, Rua do Matao, Travessa R, 187 CEP 05508-090 Cidade Universitaria, Sao Paulo (Brazil)

    2010-07-15

    The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using the eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure call (RPC) based on the XML (RPC-XML technology). The integration among Java language, XML and RPC-XML technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application allows simple graphical user interface (GUI) access. The TCABR tokamak team in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa) is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment, organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks'.

  19. Differential equations for loop integrals in Baikov representation

    Science.gov (United States)

    Bosma, Jorrit; Larsen, Kasper J.; Zhang, Yang

    2018-05-01

    We present a proof that differential equations for Feynman loop integrals can always be derived in Baikov representation without involving dimension-shift identities. We moreover show that in a large class of two- and three-loop diagrams it is possible to avoid squared propagators in the intermediate steps of setting up the differential equations.

  20. The Knowledge Sharing Based on PLIB Ontology and XML for Collaborative Product Commerce

    Science.gov (United States)

    Ma, Jun; Luo, Guofu; Li, Hao; Xiao, Yanqiu

    Collaborative Product Commerce (CPC) has become a new commerce mode for manufacturing. In order to promote more efficient information exchange in CPC, a knowledge-sharing framework based on the PLIB (ISO 13584) ontology and XML was presented, and its implementation method was studied. First, according to the methodology of PLIB (ISO 13584), a common ontology, the PLIB ontology, was put forward, which provides a coherent conceptual meaning within the context of the CPC domain. Meanwhile, for the sake of knowledge exchange over the Internet, the EXPRESS-based formal description of the PLIB ontology was converted into XML Schema, and two mapping methods were presented, a direct mapping approach and a meta-level mapping approach, of which the latter was adopted. Based on the above work, a parts-resource knowledge-sharing framework (CPC-KSF) was put forward and realized, and it has been applied in the collaborative product commerce of automotive component manufacturing.

  1. Developing and Deploying an XML-based Learning Content Management System at the FernUniversität Hagen

    Directory of Open Access Journals (Sweden)

    Gerd Steinkamp

    2005-02-01

    Full Text Available This paper is a report about the FuXML project carried out at the FernUniversität Hagen. FuXML is a Learning Content Management System (LCMS aimed at providing a practical and efficient solution for the issues attributed to authoring, maintenance, production and distribution of online and offline distance learning material. The paper presents the environment for which the system was conceived and describes the technical realisation. We discuss the reasons for specific implementation decisions and also address the integration of the system within the organisational and technical infrastructure of the university.

  2. Enhancement of the Work in Scia Engineer's Environment by Employment of XML Programming Language

    Directory of Open Access Journals (Sweden)

    Kortiš Ján

    2015-12-01

    Full Text Available The productivity of the work of engineers in the design of building structures by applying the rules of technical standards [1] has been increasing by using different software products for recent years. The software products offer engineers new possibilities to design different structures. However, there are problems especially for design of structures with similar static schemes as it is needed to follow the same work-steps. This can be more effective if the steps are done automatically by using a programming language for leading the processes that are done by software. The design process of timber structure which is done in the environment of Scia Engineer software is presented in the article. XML Programming Language is used for automatization of the design and the XML code is modified in the Excel environment by using VBA Programming language [2], [3].

  3. A Whiter Shade of Grey: A new approach to archaeological grey literature using the XML version of the TEI Guidelines

    Directory of Open Access Journals (Sweden)

    Gail Falkingham

    2005-04-01

    Full Text Available This article has arisen through the author's interest in two contemporary issues within archaeology: the production and dissemination of grey literature and the potential of XML. Grey literature is examined, with specific reference to unpublished reports literature produced in the present climate of developer-funded archaeology in England. There are concerns about the accessibility of this literature, both from within and beyond the archaeological profession. The vast majority of reports are word-processed and then printed in hard-copy format for limited distribution. The original digital document, however, has largely been seen as a by-product. Awareness of the importance of these digital reports, and of their preservation, must be raised. Electronic means of delivery and dissemination via the World Wide Web offer huge potential and present opportunities for new ways of working. Archaeology is not alone in seeking to promote the accessibility of grey literature; indeed there are many disciplines that have created online initiatives aiming to do just this, utilising a variety of means and a range of electronic file formats. The use of XML technology appears to offer many advantages over traditional formats, such as word-processed, PDF and even (X)HTML files, particularly with regard to the manipulation and presentation of encoded electronic text. Increasingly, XML technology is being used for electronic delivery and dissemination and the pros and cons of so doing are discussed in this article. This theme has been developed by the author through a 'proof of concept' practical case study of three unpublished grey literature archaeology reports from the North Yorkshire Historic Environment Record. XML documents have been created from the original word-processed electronic reports by the manual application of XML markup, the methodology for which was devised following the XML version of the Text Encoding Initiative's TEI P4 Guidelines. The level of

  4. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    Science.gov (United States)

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria from collection of samples and data display to publication of experimental results. The integration and exchanging of these data of different formats and structure imposes a great challenge to us. The XML technology presents a promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with Human Proteome Markup Language (HUP-ML) editor and imported into native XML database Xindice. The 2D/MS repository of NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via Internet. On our website, two methods, keyword query and click query, were provided at the same time to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.
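    The keyword query mentioned above can be illustrated generically as a search over stored XML entries; the document structure below is invented (it is not HUP-ML), and Xindice itself is queried through XPath rather than this in-memory stand-in.

```python
# Generic sketch of a keyword query over stored XML entries, analogous in
# spirit to the repository's keyword search; the structure is invented.
import xml.etree.ElementTree as ET

REPOSITORY = """
<spots>
  <spot id="NPC-0012"><protein>Keratin 8</protein><mass>53700</mass></spot>
  <spot id="NPC-0027"><protein>Annexin A1</protein><mass>38700</mass></spot>
</spots>
"""

def keyword_query(xml_text: str, keyword: str):
    root = ET.fromstring(xml_text)
    return [
        spot.get("id")
        for spot in root.findall("spot")
        if keyword.lower() in spot.findtext("protein", "").lower()
    ]

print(keyword_query(REPOSITORY, "annexin"))  # -> ['NPC-0027']
```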

  5. APROXIMACIÓN A LA REPRESENTACIÓN EN XML DE OBJETOS DICOM PARA FOTOGRAFÍA MÉDICA DIGITAL

    Directory of Open Access Journals (Sweden)

    Carlos Ruiz

    2007-12-01

    Full Text Available The DICOM standard (Digital Imaging and Communication in Medicine) is a non-proprietary protocol for the exchange of medical information. DICOM represents and defines the information of real-world objects such as a magnetic resonance image (MRI), a computerized axial tomography (CT) and a digital medical photography (VL Photographic) through information object definitions called IODs. This article describes a methodology to represent the IOD of a digital medical photography of visible light (VL Photographic Image) through XML Schema documents. These documents were used in the creation and validation of XML documents that represent the clinical and technical information associated with digital medical photographs, for their later implementation in a teledermatology web application.

  6. Clinical map document based on XML (cMDX): document architecture with mapping feature for reporting and analysing prostate cancer in radical prostatectomy specimens

    Directory of Open Access Journals (Sweden)

    Bettendorf Olaf

    2010-11-01

    Full Text Available Abstract Background The pathology report of radical prostatectomy specimens plays an important role in clinical decisions and the prognostic evaluation in Prostate Cancer (PCa). The anatomical schema is a helpful tool to document PCa extension for clinical and research purposes. To achieve electronic documentation and analysis, an appropriate documentation model for anatomical schemas is needed. For this purpose we developed cMDX. Methods The document architecture of cMDX was designed according to Open Packaging Conventions by separating the whole data into template data and patient data. Analogue custom XML elements were considered to harmonize the graphical representation (e.g. tumour extension) with the textual data (e.g. histological patterns). The graphical documentation was based on the four-layer visualization model that forms the interaction between different custom XML elements. Sensitive personal data were encrypted with a 256-bit cryptographic algorithm to avoid misuse. In order to assess the clinical value, we retrospectively analysed the tumour extension in 255 patients after radical prostatectomy. Results The pathology report with cMDX can represent pathological findings of the prostate in schematic styles. Such reports can be integrated into the hospital information system. "cMDX" documents can be converted into different data formats like text, graphics and PDF. Supplementary tools like cMDX Editor and an analyser tool were implemented. The graphical analysis of 255 prostatectomy specimens showed that PCa were mostly localized in the peripheral zone (Mean: 73% ± 25). 54% of PCa showed a multifocal growth pattern. Conclusions cMDX can be used for routine histopathological reporting of radical prostatectomy specimens and provide data for scientific analysis.

  7. Toward a Normalized XML Schema for the GGP Data Archives

    Directory of Open Access Journals (Sweden)

    Alban Gabillon

    2013-04-01

    Full Text Available Since 1997, the Global Geodynamics Project (GGP) stations have used a text-based data format. The main drawback of this type of data coding is the lack of data integrity during the data flow processing. As a result, metadata and even data must be checked by human operators. In this paper, we propose a new format for representing the GGP data. This new format is based on the eXtensible Markup Language (XML).

  8. Creating Open Digital Library Using XML Implementation of OAi-PMH Protocol at CERN

    CERN Document Server

    Vesely, M; Le Meur, Jean-Yves; Simko, Tibor

    2002-01-01

    This article describes the implementation of the OAi-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and service provider and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.

  9. Creating Open Digital Library Using XML: Implementation of OAi-PMH Protocol at CERN

    OpenAIRE

    Vesely, M; Baron, T; Le Meur, Jean-Yves; Simko, Tibor

    2002-01-01

    This article describes the implementation of the OAi-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and service provider and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.

  10. XML Based Business-to-Business E-Commerce Frameworks%基于XML的B2B电子商务构架

    Institute of Scientific and Technical Information of China (English)

    范国闯; 刘庆文; 李京; 钟华

    2002-01-01

    The B2B (Business-to-Business) e-commerce framework solves the key problem of interoperability between enterprises during e-commerce transactions. Firstly, this paper presents several key factors of a B2B e-commerce framework by analyzing the role of such frameworks. Moreover, this paper analyzes and compares several internationally popular B2B frameworks from the point of view of these factors. Finally, this paper proposes the design principles, objectives and e-commerce transaction language of the cnXML (Chinese e-Commerce XML) framework.

  11. Wavelet representation of the nuclear dynamics

    International Nuclear Information System (INIS)

    Jouault, B.; Sebille, F.; Mota, V. de la.

    1997-01-01

    The study of transport phenomena in nuclear matter is addressed in a new approach named DYWAN, based on the projection methods of statistical physics and on the mathematical theory of wavelets. Strongly compressed representations of the nuclear systems are obtained with an accurate description of the wave functions and of their antisymmetrization. The results of the approach are illustrated for the ground state description as well as for the dissipative dynamics of nuclei at intermediate energies. (K.A.)

  12. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    Full Text Available In recent years the most popular subject in Information System area is Enterprise Application Integration (EAI. It can be defined as a process of forming a standart connection between different systems of an organization?s information system environment. The incorporating, gaining and marriage of corporations are the major reasons of popularity in Enterprise Application Integration. The main purpose is to solve the application integrating problems while similar systems in such corporations continue working together for a more time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within the corporation or between the corporations.

  13. New publicly available chemical query language, CSRML, to support chemotype representations for application to data mining and modeling.

    Science.gov (United States)

    Yang, Chihae; Tarkhov, Aleksey; Marusczyk, Jörg; Bienfait, Bruno; Gasteiger, Johann; Kleinoeder, Thomas; Magdziarz, Tomasz; Sacher, Oliver; Schwab, Christof H; Schwoebel, Johannes; Terfloth, Lothar; Arvidson, Kirk; Richard, Ann; Worth, Andrew; Rathman, James

    2015-03-23

    Chemotypes are a new approach for representing molecules, chemical substructures and patterns, reaction rules, and reactions. Chemotypes are capable of integrating types of information beyond what is possible using current representation methods (e.g., SMARTS patterns) or reaction transformations (e.g., SMIRKS, reaction SMILES). Chemotypes are expressed in the XML-based Chemical Subgraphs and Reactions Markup Language (CSRML), and can be encoded not only with connectivity and topology but also with properties of atoms, bonds, electronic systems, or molecules. CSRML has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory, and commercial-use chemical space, as well as to represent chemical patterns and properties especially relevant to various toxicity concerns. A software application, ChemoTyper has also been developed and made publicly available in order to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as reference implementation so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML-based CSRML standard used to express chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge.

  14. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    Science.gov (United States)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, data of interest is described using the XML markup language, which allows the data to be stored in a text string. Software to transform output data of a task into an XML string and software to read an XML string and extract all or a portion of the data needed for another application are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a standard spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces and turbine-blade data produced by an independent blade design program (UD0300).
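    The core interchange pattern, one program writing its results as an XML string and another extracting only the portion it needs, can be sketched as follows; the element names are invented, not the format used in the grant work.

```python
# Sketch of XML-based data interchange: one program serializes its results to
# an XML string, another later extracts only the fields it needs. Element
# names are invented for illustration.
import xml.etree.ElementTree as ET

def export_results(thickness_mm: float, chord_mm: float) -> str:
    root = ET.Element("bladeSection")
    ET.SubElement(root, "thickness", units="mm").text = str(thickness_mm)
    ET.SubElement(root, "chord", units="mm").text = str(chord_mm)
    return ET.tostring(root, encoding="unicode")

def import_chord(xml_text: str) -> float:
    # A downstream tool reads only the portion of the data it needs.
    return float(ET.fromstring(xml_text).findtext("chord"))

xml_string = export_results(4.2, 55.0)
print(xml_string)
print(import_chord(xml_string))  # -> 55.0
```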

  15. On spallation and fragmentation of heavy ions at intermediate energies

    International Nuclear Information System (INIS)

    Musulmanbekov, G.; Al-Haidary, A.

    2002-01-01

    A new code for simulation of spallation and (multi)fragmentation of nuclei in proton- and nucleus-induced collisions at intermediate and high energies is developed. The code is a combination of a modified intranuclear cascade model with a traditional fission-evaporation part and a multifragmentation part based on a lattice representation of nuclear structure and a percolation approach. The production of s-wave resonances and the formation-time concept included in the standard intranuclear cascade code provide a correct calculation of the excitation energy of residues. This modified cascade code serves as a bridge between low- and high-energy model descriptions of nucleus-nucleus collisions. A good agreement with experiments has been obtained for multiparticle production at intermediate and relatively high energies. The nuclear structure of the colliding nuclei is represented as a face-centered cubic lattice. This representation, being isomorphic to the shell model of nuclear structure, allows the percolation approach to be applied to nuclear fragmentation. The proposed percolation model includes both site and bond percolation. Broken sites represent holes left by nucleons knocked out at the cascade stage. Therefore, in the first, cascade stage, mutual rescattering of the colliding nuclei results in knocking some nucleons out of them. After this fast stage, partially destroyed and excited residues remain. In the second stage the residual nuclei either evaporate nucleons and light nuclei up to alpha particles or fragment into pieces with intermediate masses. The choice depends on the degree of destruction of the residue. At low excitation energy and small destruction of the residue, the evaporation and fission mechanisms are preferable. The higher the excitation energy and the destruction, the higher the probability of the (multi)fragmentation process. Moreover, the greater the degree of destruction of the residue, the higher the site percolation probability. It is concluded that at low and intermediate excitation energies the fragmentation of nuclei is slow

  16. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    Science.gov (United States)

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  17. Wavelet representation of the nuclear dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jouault, B.; Sebille, F.; Mota, V. de la

    1997-12-31

    The study of transport phenomena in nuclear matter is addressed in a new approach named DYWAN, based on the projection methods of statistical physics and on the mathematical theory of wavelets. Strongly compressed representations of the nuclear systems are obtained with an accurate description of the wave functions and of their antisymmetrization. The results of the approach are illustrated for the ground state description as well as for the dissipative dynamics of nuclei at intermediate energies. (K.A.). 52 refs.

  18. An XML Approach of Coding a Morphological Database for Arabic Language

    OpenAIRE

    Gridach, Mourad; Chenfour, Noureddine

    2011-01-01

    We present an XML approach for the production of a morphological database for the Arabic language that will be used in morphological analysis for Modern Standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For the Arabic language, producing a morphological database is not an easy task, because it has some particularities such as the phenomena of agglutination and a...

  19. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML Database – Xindice

    Directory of Open Access Journals (Sweden)

    Li Jianling

    2006-01-01

    Full Text Available Abstract Background Many proteomics initiatives require integration of all information with uniform criteria from collection of samples and data display to publication of experimental results. The integration and exchanging of these data of different formats and structure imposes a great challenge to us. The XML technology presents a promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. Results The raw NPC proteome experiment data were captured into one XML document with Human Proteome Markup Language (HUP-ML) editor and imported into native XML database Xindice. The 2D/MS repository of NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via Internet. On our website, two methods, keyword query and click query, were provided at the same time to access the entries of the NPC proteome database. Conclusion Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.

  20. Defining the XML schema matching problem for a personal schema based query answering system

    NARCIS (Netherlands)

    Smiljanic, M.; van Keulen, Maurice; Jonker, Willem

    In this report, we analyze the problem of personal schema matching. We define the ingredients of the XML schema matching problem using constraint logic programming. This allows us to thoroughly investigate specific matching problems. We do not have the ambition to provide for a formalism that covers

  1. An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters

    National Research Council Canada - National Science Library

    Stewart, Jeffrey

    2003-01-01

    .... The system uses XML technologies in server/client and stand alone environments. With a web browser, the user views and navigates the system's content from a downloaded file collection or from a centralized data source via a network connection...

  2. Report on the first Twente Data Management Workshop on XML Databases and Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Mihajlovic, V.

    2004-01-01

    The Database Group of the University of Twente initiated a new series of workshops called Twente Data Management workshops (TDM), starting with one on XML Databases and Information Retrieval which took place on 21 June 2004 at the University of Twente. We have set ourselves two goals for the

  3. 文件物件模型及其在XML文件處理之應用 Document Object Model and Its Application on XML Document Processing

    Directory of Open Access Journals (Sweden)

    Sinn-cheng Lin

    2001-06-01

    Full Text Available 無Document Object Model (DOM is an application-programming interface that can be applied to process XML documents. It defines the logical structure, the accessing interfaces and the operation methods for the document. In the DOM, an original document is mapped to a tree structure. Therefore ,the computer program can easily traverse the tree manipulate the nodes in the tree. In this paper, the fundamental models, definitions and specifications of DOM are surveyed. Then we create an experimenta1 system of DOM called XML On-Line Parser. The front-end of the system is built by the Web-based user interface for the XML document input and the parsed result output. On the other hand, the back-end of the system is built by an ASP program, which transforms the original document to DOM tree for document manipulation. This on-line system can be linked with a general-purpose web browser to check the well-formedness and the validity of the XML documents.

  4. QuakeML: status of the XML-based seismological data exchange format

    OpenAIRE

    Joachim Saul; Philipp Kästli; Fabian Euchner; Danijel Schorlemmer

    2011-01-01

    QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments...

  5. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data.

    Science.gov (United States)

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-05-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.

  6. Region based route planning - Multi-abstraction route planning based on intermediate level vision processing

    Science.gov (United States)

    Doshi, Rajkumar S.; Lam, Raymond; White, James E.

    1989-01-01

    Intermediate and high level processing operations are performed on vision data for the organization of images into more meaningful, higher-level topological representations by means of a region-based route planner (RBRP). The RBRP operates in terrain scenarios where some or most of the terrain is occluded, proceeding without a priori maps on the basis of two-dimensional representations and gradient-and-roughness information. Route planning is accomplished by three successive abstractions and yields a detailed point-by-point path by searching only within the boundaries of relatively small regions.

  7. A general XML schema and SPM toolbox for storage of neuro-imaging results and anatomical labels.

    Science.gov (United States)

    Keator, David Bryant; Gadde, Syam; Grethe, Jeffrey S; Taylor, Derek V; Potkin, Steven G

    2006-01-01

    With the increased frequency of multisite, large-scale collaborative neuro-imaging studies, the need for a general, self-documenting framework for the storage and retrieval of activation maps and anatomical labels becomes evident. To address this need, we have developed an extensible markup language (XML) schema and associated tools for the storage of neuro-imaging activation maps and anatomical labels. This schema, as part of the XML-based Clinical Experiment Data Exchange (XCEDE) schema, provides storage capabilities for analysis annotations, activation threshold parameters, and cluster- and voxel-level statistics. Activation parameters contain information describing the threshold, degrees of freedom, FWHM smoothness, search volumes, voxel sizes, expected voxels per cluster, and expected number of clusters in the statistical map. Cluster and voxel statistics can be stored along with the coordinates, threshold, and anatomical label information. Multiple threshold types can be documented for a given cluster or voxel along with the uncorrected and corrected probability values. Multiple atlases can be used to generate anatomical labels and stored for each significant voxel or cluster. Additionally, a toolbox for Statistical Parametric Mapping software (http://www.fil.ion.ucl.ac.uk/spm/) was created to capture the results from activation maps using the XML schema; it supports both the SPM99 and SPM2 versions (http://nbirn.net/Resources/Users/Applications/xcede/SPM_XMLTools.htm). Support for anatomical labeling is available via the Talairach Daemon (http://ric.uthscsa.edu/projects/talairachdaemon.html) and Automated Anatomical Labeling (http://www.cyceron.fr/freeware/).

  8. A Process for the Representation of openEHR ADL Archetypes in OWL Ontologies.

    Science.gov (United States)

    Porn, Alex Mateus; Peres, Leticia Mara; Didonet Del Fabro, Marcos

    2015-01-01

    ADL is a formal language for expressing archetypes, independent of standards or domain. However, its specification is not precise enough regarding the specialization and semantics of archetypes, which leads to implementation difficulties and few available tools. Archetypes may be implemented using other languages such as XML or OWL, increasing integration with Semantic Web tools. Exchanging and transforming data can be better implemented with semantics-oriented models, for example using OWL, a W3C language for defining and instantiating Web ontologies. OWL permits the user to define significant, detailed, precise and consistent distinctions among classes, properties and relations, ensuring greater consistency of knowledge than ADL techniques alone. This paper presents a process for representing openEHR ADL archetypes in OWL ontologies. The process consists of converting ADL archetypes into OWL ontologies and validating the resulting OWL ontologies using mutation testing.

  9. Relational Data Modelling of Textual Corpora: The Skaldic Project and its Extensions

    DEFF Research Database (Denmark)

    Wills, Tarrin Jon

    2015-01-01

    Skaldic poetry is a highly complex textual phenomenon both in terms of the intricacy of the poetry and its contextual environment. Extensible Markup Language (XML) applications such as that of the Text Encoding Initiative provide a means of semantic representation of some of these complexities. XML...

  10. Upgrading a TCABR Data Analysis and Acquisition System for Remote Participation Using Java, XML, RCP and Modern Client/Server Communication/Authentication

    Energy Technology Data Exchange (ETDEWEB)

    De Sa, W. [University of Sao Paulo - Institute of Physics - Plasma Physics Laboratory, Sao Paulo (Brazil)

    2009-07-01

    Each plasma physics laboratory has a proprietary control and data acquisition scheme that usually differs from one laboratory to another, meaning each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and it is difficult to follow the work remotely with such private systems. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as its programming environment. Since application parameters and hardware in a joint experiment are very complex, with a large variability of components, requirement and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. Communication between clients and servers uses Remote Procedure Call (RPC) based on XML (XML-RPC technology). The integration of the Java language, XML and XML-RPC technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides simple Graphical User Interface (GUI) access. The TCABR tokamak team, collaborating with the CFN (Nuclear Fusion Center, Technical University of Lisbon), is implementing these remote participation technologies, which are going to be tested at the Joint Experiment on TCABR (TCABR-JE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on
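
    To make the RPC-over-XML idea concrete, the sketch below pairs a toy XML-RPC server and client using only Python's standard library; the method name get_signal, the host/port, and the returned values are hypothetical and are not part of the TCABR system.

      # Hypothetical XML-RPC data-access sketch (not the TCABR implementation).
      from xmlrpc.server import SimpleXMLRPCServer
      from xmlrpc.client import ServerProxy
      import threading

      def get_signal(shot, channel):
          """Pretend to fetch a diagnostic signal for a given shot number."""
          return {"shot": shot, "channel": channel, "samples": [0.0, 0.1, 0.2]}

      # Server side: expose the function; requests and replies travel as XML over HTTP.
      server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
      server.register_function(get_signal, "get_signal")
      threading.Thread(target=server.serve_forever, daemon=True).start()

      # Client side: the proxy marshals the call into an XML-RPC request.
      client = ServerProxy("http://localhost:8000")
      print(client.get_signal(12345, "interferometer"))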

  11. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool, a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All of the above-described procedures integrate disparate data formats, protocols and tools. ART-ML proposes a representation approach, extending ARTool, for the interoperability of the individual resources, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, as well as the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software

  12. Priming Contour-Deleted Images: Evidence for Immediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  13. Version control of pathway models using XML patches.

    Science.gov (United States)

    Saffrey, Peter; Orton, Richard

    2009-03-17

    Computational modelling has become an important tool in understanding biological systems such as signalling pathways. With an increase in the size and complexity of models comes a need for techniques to manage model versions and their relationships to one another. Model version control for pathway models shares some of the features of software version control but has a number of differences that warrant a specific solution. We present a model version control method, along with a prototype implementation, based on XML patches. We show its application to the EGF/RAS/RAF pathway. Our method allows quick and convenient storage of a wide range of model variations and enables a thorough explanation of these variations. Trying to produce these results without such methods results in slow and cumbersome development that is prone to frustration and human error.
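
    The paper's actual patch format is not reproduced here; the following hedged sketch only illustrates the general idea of an XML "patch" that replaces the value of one element identified by a path, with an invented pathway document and patch structure.

      # Illustrative sketch of applying a simple XML "patch" to a model file.
      # The document structure and patch semantics are invented; the paper's
      # actual patch format is not reproduced here.
      import xml.etree.ElementTree as ET

      model_v1 = ET.fromstring(
          "<model><parameter name='k1'>0.5</parameter>"
          "<parameter name='k2'>1.2</parameter></model>")

      # A "patch": change parameter k2 between two versions of the model.
      patch = {"target": "./parameter[@name='k2']", "new_text": "2.4"}

      def apply_patch(tree, patch):
          element = tree.find(patch["target"])   # locate the patched element
          element.text = patch["new_text"]       # apply the replacement value
          return tree

      model_v2 = apply_patch(model_v1, patch)
      print(ET.tostring(model_v2, encoding="unicode"))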

  14. XML Storage for Magnetotelluric Transfer Functions: Towards a Comprehensive Online Reference Database

    Science.gov (United States)

    Kelbert, A.; Blum, C.

    2015-12-01

    Magnetotelluric Transfer Functions (MT TFs) represent most of the information about Earth electrical conductivity found in the raw electromagnetic data, providing inputs for further inversion and interpretation. To be useful for scientific interpretation, they must also contain carefully recorded metadata. Making these data available in a discoverable and citable fashion would provide the most benefit to the scientific community, but such a development requires that the metadata is not only present in the file but is also searchable. The most commonly used MT TF format to date, the historical Society of Exploration Geophysicists Electromagnetic Data Interchange Standard 1987 (EDI), no longer supports some of the needs of modern magnetotellurics, most notably the accurate recording of error bars. Moreover, the inherent heterogeneity of EDIs and other historic MT TF formats has mostly kept the community away from healthy data sharing practices. Recently, the MT team at Oregon State University, in collaboration with the IRIS Data Management Center, developed a new, XML-based format for MT transfer functions, and an online system for long-term storage, discovery and sharing of MT TF data worldwide (IRIS SPUD; www.iris.edu/spud/emtf). The system provides a query page where all of the MT transfer functions collected within the USArray MT experiment and other field campaigns can be searched for and downloaded; an automatic on-the-fly conversion to the historic EDI format is also included. To facilitate conversion to the new, more comprehensive and sustainable XML format for MT TFs, and to streamline inclusion of historic data into the online database, we developed a set of open source format conversion tools, which can be used for rotation of MT TFs, as well as a general XML/EDI converter (https://seiscode.iris.washington.edu/projects/emtf-fcu). Here, we report on the newly established collaboration between the USGS Geomagnetism Program and the Oregon State University to gather and

  15. Managing and Querying Image Annotation and Markup in XML.

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.

  17. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
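
    As a hedged illustration of the kind of hierarchical, semantically tagged classification described above (not the authors' actual schema), the fragment below builds and queries a tiny ICD-10-like XML tree with Python's standard library; the element and attribute names are invented for the example.

      # Toy hierarchical classification in XML (invented tags, not the authors' schema).
      import xml.etree.ElementTree as ET

      icd_xml = """
      <chapter code="X" title="Diseases of the respiratory system">
        <block code="J40-J47" title="Chronic lower respiratory diseases">
          <category code="J45" title="Asthma">
            <subcategory code="J45.0" title="Predominantly allergic asthma"/>
          </category>
        </block>
      </chapter>
      """
      root = ET.fromstring(icd_xml)

      # The hierarchy is explicit in the document structure, so lookups are simple path queries.
      node = root.find(".//subcategory[@code='J45.0']")
      print(node.get("title"))     # -> Predominantly allergic asthma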

  18. XSAMS: XML Schema for Atoms, Molecules and Solids. Summary report of an IAEA Consultants' Meeting

    Energy Technology Data Exchange (ETDEWEB)

    Braams, B J [International Atomic Energy Agency, Vienna (Austria)

    2010-05-15

    Experts on atomic and molecular data and their exchange met at National Institute for Fusion Science, Toki-City, Japan, to review progress in the implementation of XSAMS, the XML Schema for Atoms, Molecules and Solids, and to discuss further development of the Schema. The proceedings of the meeting are summarized here. (author)

  19. XSAMS: XML Schema for Atoms, Molecules and Solids. Summary report of an IAEA Consultants' Meeting

    International Nuclear Information System (INIS)

    Braams, B.J.

    2010-05-01

    Experts on atomic and molecular data and their exchange met at National Institute for Fusion Science, Toki-City, Japan, to review progress in the implementation of XSAMS, the XML Schema for Atoms, Molecules and Solids, and to discuss further development of the Schema. The proceedings of the meeting are summarized here. (author)

  20. Distribution of immunodeficiency fact files with XML – from Web to WAP

    Directory of Open Access Journals (Sweden)

    Riikonen Pentti

    2005-06-01

    Full Text Available Abstract Background: Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery across different platforms. Methods: Here we present a new data model called the fact file and an XML-based specification, the Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. Results: IDML and fact files were used to build a comprehensive Web- and WAP-accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface that serves as a starting point for exploring information on hereditary diseases. Conclusion: IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.

  1. HepML, an XML-based format for describing simulated data in high energy physics

    Science.gov (United States)

    Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.

    2010-10-01

    In this paper we describe the HepML format and a corresponding C++ library developed for keeping a complete description of parton-level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define the necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and for representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community. Program summary: Program title: libhepml. Catalogue identifier: AEGL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPLv3. No. of lines in distributed program, including test data, etc.: 138 866. No. of bytes in distributed program, including test data, etc.: 613 122. Distribution format: tar.gz. Programming language: C++, C. Computer: PCs and workstations. Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10. RAM: 1 073 741 824 bytes (1 Gb). Classification: 6.2, 11.1, 11.2. External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/). Nature of problem: Monte Carlo simulation in high

  2. Applying XML-Based Technologies to Developing Online Courses: The Case of a Prototype Learning Environment

    Science.gov (United States)

    Jedrzejowicz, Joanna; Neumann, Jakub

    2007-01-01

    Purpose: This paper seeks to describe XML technologies and to show how they can be applied for developing web-based courses and supporting authors who do not have much experience with the preparation of web-based courses. Design/methodology/approach: When developing online courses the academic staff has to address the following problem--how to…

  3. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    Full Text Available In general, Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
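
    As a small companion sketch, the following Python fragment parses a stripped-down JATS-like article header and extracts the article title; only a few well-known JATS element names are used, and the sample document itself is invented.

      # Parse a minimal JATS-like header (the sample document is invented).
      import xml.etree.ElementTree as ET

      jats = """
      <article>
        <front>
          <article-meta>
            <title-group>
              <article-title>Coding practice of the JATS XML</article-title>
            </title-group>
          </article-meta>
        </front>
      </article>
      """
      tree = ET.fromstring(jats)
      title = tree.find("./front/article-meta/title-group/article-title")
      print(title.text)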

  4. The representation of knowledge within model-based control systems

    International Nuclear Information System (INIS)

    Weygand, D.P.; Koul, R.

    1987-01-01

    The ability to represent knowledge is often considered essential to building systems with reasoning capabilities. In computer science, a good solution often depends on a good representation. The first step in the development of most computer applications is the selection of a representation for the input, output, and intermediate results that the program will operate upon. For applications in artificial intelligence, this initial choice of representation is especially important, because the possible representational paradigms are diverse and the forcing criteria for the choice are usually not clear in the beginning. Yet the consequences of an inadequate choice can be devastating in the later stages of a project if it is discovered that critical information cannot be encoded within the chosen representational paradigm. Problems arise when designing representational systems to support any kind of knowledge-based system, that is, a computer system that uses knowledge to perform some task. The general case of knowledge-based systems can be thought of as reasoning agents applying knowledge to achieve goals. Artificial Intelligence (AI) research involves building computer systems to perform tasks of perception and reasoning, as well as storage and retrieval of data. The problem of automatically perceiving large patterns in data is a perceptual task that is beginning to be important for many expert systems applications. Most AI research assumes that what needs to be represented is known a priori; an AI researcher's job is just figuring out how to encode the information in the system's data structures and procedures. 10 refs

  5. Encoding of coordination complexes with XML.

    Science.gov (United States)

    Vinoth, P; Sankar, P

    2017-09-01

    An in-silico system to encode the structure, bonding and properties of coordination complexes is developed. The encoding is achieved through a semantic XML markup frame. The composition of the coordination complexes is captured in terms of the central atom and ligands. Structural information on the central atom is detailed in terms of the electron status of its valence electron orbitals. The ligands are encoded with specific reference to the electron environment of the ligand centre atoms. The tendency of ligands to form low- or high-spin complexes is captured by assigning a Ligand Centre Value to every ligand based on the electronic environment of its ligand centre atom. Chemical ontologies are used for categorization purposes and to control different hybridization schemes. Complexes formed by central atoms of transition metals and of non-transition elements belonging to the s-block, p-block and f-block are encoded with a generic encoding platform. Complexes of homoleptic, heteroleptic and bridged types are also covered by this encoding system. The utility of the encoding system for predicting redox electron transfer reactions in coordination complexes is demonstrated with a simple application. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. The MHD intermediate shock interaction with an intermediate wave: Are intermediate shocks physical?

    International Nuclear Information System (INIS)

    Wu, C.C.

    1988-01-01

    Contrary to the usual belief that MHD intermediate shocks are extraneous, the author has recently shown by numerical solutions of dissipative MHD equations that intermediate shocks are admissible and can be formed through nonlinear steepening from a continuous wave. In this paper, he clarifies the differences between the conventional view and these results by studying the interaction of an MHD intermediate shock with an intermediate wave. The study reaffirms his results. In addition, the study shows that there exists a larger class of shock-like solutions in the time-dependent dissipative MHD equations than are given by the MHD Rankine-Hugoniot relations. It also suggests a mechanism for forming rotational discontinuities through the interaction of an intermediate shock with an intermediate wave. The results are of importance not only to MHD shock theory but also to studies such as magnetic field reconnection models

  7. An XML Approach of Coding a Morphological Database for Arabic Language

    Directory of Open Access Journals (Sweden)

    Mourad Gridach

    2011-01-01

    Full Text Available We present an XML approach for the production of an Arabic morphological database that will be used in morphological analysis for modern standard Arabic (MSA). Optimizing the production, maintenance, and extension of the morphological database is one of the crucial aspects impacting natural language processing (NLP). For Arabic, producing a morphological database is not an easy task, because the language has some particularities, such as the phenomenon of agglutination and a high degree of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographical correction.

  8. Using XML and Java Technologies for Astronomical Instrument Control

    Science.gov (United States)

    Ames, Troy; Case, Lynne; Powers, Edward I. (Technical Monitor)

    2001-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center, under the Instrument Remote Control (IRC) project, is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human readable and machine understandable way to describe structured data. A key aspect of the object-oriented architecture is that the software is driven by an instrument description, written using the Instrument Markup Language (IML), a dialect of XML. IML is used to describe the command sets and command formats of the instrument, communication mechanisms, format of the data coming from the instrument, and characteristics of the graphical user interface to control and monitor the instrument. The IRC framework allows the users to define a data analysis pipeline which converts data coming out of the instrument. The data can be used in visualizations in order for the user to assess the data in real-time, if necessary. The data analysis pipeline algorithms can be supplied by the user in a variety of forms or programming languages. Although the current integration effort is targeted for the High-resolution Airborne Wideband Camera (HAWC) and the Submillimeter and Far Infrared Experiment (SAFIRE), first-light instruments of the Stratospheric Observatory for Infrared Astronomy (SOFIA), the framework is designed to be generic and extensible so that it can be applied to any instrument. Plans are underway to test the framework
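
    The IML dialect itself is not reproduced here; the fragment below is a hypothetical sketch of the general pattern of driving software from an XML instrument description, with invented element names and command formats.

      # Hypothetical XML-driven command table (invented tags, not the real IML schema).
      import xml.etree.ElementTree as ET

      description = """
      <instrument name="demo-camera">
        <command name="set_exposure" format="EXP {value:.1f}"/>
        <command name="read_temperature" format="TEMP?"/>
      </instrument>
      """
      root = ET.fromstring(description)

      # Build a command table from the description instead of hard-coding it.
      commands = {c.get("name"): c.get("format") for c in root.findall("command")}

      def render(name, **kwargs):
          """Render the wire-format string for a named command."""
          return commands[name].format(**kwargs)

      print(render("set_exposure", value=2.5))   # -> EXP 2.5
      print(render("read_temperature"))          # -> TEMP?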

  9. Comparing FrameMaker and Quicksilver as Tools for Producing Single Sourced Content from XML

    OpenAIRE

    HUHTAMÄKI, HENRI

    2006-01-01

    The purpose of this study is to compare two programs commonly used for producing technical documentation, Adobe FrameMaker and Broadvision Quicksilver, from the perspective of single sourcing when the source material is in XML format. The aim is to give technical writers and companies specializing in technical communication enough information to choose the right tool for their own purposes. To limit the scope of the study, the tools are tested as complete packages...

  10. Analysis of RDF Syntaxes for Semantic Web Development

    Directory of Open Access Journals (Sweden)

    Gryaznov Yevgeny

    2015-12-01

    Full Text Available In this paper the authors investigate the possibilities of using RDF (Resource Description Framework) syntaxes for information representation in the Semantic Web. It is described why pure XML cannot be used effectively for this purpose, and how the RDF framework solves this problem. Information is represented in the form of a directed graph. RDF is only an abstract formal model for information representation, and additional tools are required in order to write that information down. Such tools are RDF syntaxes – concrete text or binary formats that prescribe rules for RDF data serialization. Text-based RDF syntaxes can be developed on the basis of an existing format (XML, JSON) or can be RDF-specific, designed from scratch to serve the single purpose of serializing RDF graphs. The authors briefly describe some of the RDF syntaxes (both XML and non-XML) and compare them in order to identify the strengths and weaknesses of each. Serialization and deserialization speed tests using the Jena library are performed. The results from both the analytical and experimental parts of this research are used to develop recommendations for RDF syntax usage and to design an RDF/XML syntax subset intended to simplify development and raise the compatibility of information serialized with this RDF syntax.
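
    As a brief sketch of the serialization comparison discussed above, and assuming the third-party rdflib package is available, the fragment below writes a single invented triple in both an RDF-specific syntax (Turtle) and the XML-based RDF/XML syntax.

      # Serialize one invented RDF triple in two syntaxes (requires the rdflib package).
      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.document1, EX.title, Literal("RDF syntaxes compared")))

      print(g.serialize(format="turtle"))   # compact, RDF-specific text syntax
      print(g.serialize(format="xml"))      # RDF/XML, built on the XML format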

  11. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    Science.gov (United States)

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    GEMINI (Geo-Engineering Modeling through INternet Informatics) is public-domain, web-based freeware made up of an integrated suite of 14 Java-based software tools for accomplishing on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small-company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database; external data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside on local user PCs, on the server, or be launched via Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules, enabling the user(s) to access multiple databases. It provides flexibility for the user to customize the analytical approach, database location, and level of collaboration. An example integrated field study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. © 2005 Elsevier Ltd. All rights reserved.

  12. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs).

    Science.gov (United States)

    Wei, Tiandi; Gong, Jing; Jamitzky, Ferdinand; Heckl, Wolfgang M; Stark, Robert W; Rössle, Shaila C

    2008-11-05

    Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.

  13. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs

    Directory of Open Access Journals (Sweden)

    Stark Robert W

    2008-11-01

    Full Text Available Abstract Background: Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. Description: We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. Conclusion: LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.

  14. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group

    2007-12-01

    QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org.

  15. One Model, Different Interests: Convergence and Divergence in Intermediality Studies

    Czech Academy of Sciences Publication Activity Database

    Jedličková, Alice

    2017-01-01

    Vol. 20, No. 1 (2017), pp. 98-125, ISSN 1213-2144. R&D Projects: GA ČR (CZ) GA16-11101S. Institutional support: RVO:68378068. Keywords: intermediality * transmediation * media representation. Subject RIV: AJ - Letters, Mass-media, Audiovision. OECD field: Specific literatures

  16. Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam

    International Nuclear Information System (INIS)

    Asiev, Krum; Mullins, Joel; DeBlois, François; Liang, Liheng; Syme, Alasdair

    2014-01-01

    Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch at various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variations of the isocentre size of less than 0.1 mm along the X, Y and Z axes, and compares favorably with a manual acquisition, for which we typically observe variations up to 0.5 mm

  17. Using Web Services and XML Harvesting to Achieve a Dynamic Web Site. Computers in Small Libraries

    Science.gov (United States)

    Roberts, Gary

    2005-01-01

    Exploiting and contextualizing free information is a natural part of library culture. In this column, Gary Roberts, the information systems and reference librarian at Herrick Library, Alfred University in Alfred, NY, describes how to use XML content on a Web site to link to hundreds of free and useful resources. He gives a general overview of the…

  18. An XML transfer schema for exchange of genomic and genetic mapping data: implementation as a web service in a Taverna workflow.

    Science.gov (United States)

    Paterson, Trevor; Law, Andy

    2009-08-14

    Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently, data integration and comparison must be performed in an ad hoc manner. We have developed a simple generic XML schema (GenomicMappingData.xsd - GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross-referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows. The data

  19. 77 FR 28541 - Request for Comments on the Recommendation for the Disclosure of Sequence Listings Using XML...

    Science.gov (United States)

    2012-05-15

    ... the sequence part of the standard, and a second annex setting forth the Document Type Definition (DTD) for the standard. Five rounds of comment/revision have taken place since March 2011, and discussion of... patent data purposes. The XML standard also includes four qualifiers for amino acids. These feature keys...

  20. XML-based information system for planetary sciences

    Science.gov (United States)

    Carraro, F.; Fonte, S.; Turrini, D.

    2009-04-01

    EuroPlaNet (EPN in the following) has been developed by the planetological community under the "Sixth Framework Programme" (FP6 in the following), the European programme devoted to the improvement of European research efforts through the creation of an internal market for science and technology. The goal of the EPN programme is the creation of a European network for the diffusion of data produced by space missions dedicated to the study of the Solar System. A special place within the EPN programme is held by I.D.I.S. (Integrated and Distributed Information Service). The main goal of IDIS is to offer the planetary science community user-friendly access to the data and information produced by the various types of research activities, i.e. Earth-based observations, space observations, modeling, theory and laboratory experiments. During the FP6 programme, IDIS development consisted in the creation of a series of thematic nodes, each of them specialized in a specific scientific domain, and a technical coordination node. The four thematic nodes are the Atmosphere node, the Plasma node, the Interiors & Surfaces node and the Small Bodies & Dust node. The main task of the nodes has been the building up of selected scientific cases related to the scientific domain of each node. The second task of the EPN nodes has been the creation of a catalogue of resources related to their main scientific theme. Both these efforts have been used as the basis for the development of the main IDIS goal, i.e. the integrated distributed service. An XML-based data model has been developed to describe resources using metadata and to store the metadata within an XML-based database called eXist. A search engine has then been developed in order to allow users to search for resources within the database. Users can select the resource type and can insert one or more values or can choose a value from a list, depending on the selected resource. The system searches for all

  1. THE POSSIBILITIES FOR THE CREATION OF A LANGUAGE XML FOR THE FORMALIZATION OF THE ACCOUNTING RECORDS

    Directory of Open Access Journals (Sweden)

    Aurora Popescu

    2008-12-01

    Full Text Available During the nineties, the main trend in application development was providing computers connected to the internet with support for, and access to, a wide range of informational resources (databases, applications). Witness to this are the numerous languages and technologies that permit easy development of database-processing applications with a simple web browser, for example the scripting languages ASP, PHP, JSP, etc. Many changes have taken place in recent years regarding the informational needs and the equipment used by different users. Today not only computers are connected to the internet, but also a wide range of devices such as mobile phones and many home appliances. As a result of these needs, the conception of a universal language understood by all these diverse devices became an imperative necessity. XML is the answer to this requirement, representing a new step in the development of the information age. XML appeared as a consequence of the limits of HTML (the language of web pages), which is incapable of reusing data for other applications.

  2. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    Science.gov (United States)

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
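
    As a hedged sketch of the general idea (not the actual AIM schema), the fragment below writes a couple of quantitative measurements into a small XML document using Python's standard library; all element names and values are invented.

      # Write quantitative findings to an XML file (invented elements, not the AIM schema).
      import xml.etree.ElementTree as ET

      report = ET.Element("imaging-findings", attrib={"study": "example-001"})
      for name, value, unit in [("lesion-diameter", "12.3", "mm"),
                                ("ejection-fraction", "58", "%")]:
          m = ET.SubElement(report, "measurement", attrib={"name": name, "unit": unit})
          m.text = value

      # The same document could be archived in a local database for later data mining.
      ET.ElementTree(report).write("findings.xml", encoding="utf-8", xml_declaration=True)
      print(ET.tostring(report, encoding="unicode"))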

  3. An XML-Based Manipulation and Query Language for Rule-Based Information

    Science.gov (United States)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information contained in it. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  4. Electron capture in ion-molecule collisions at intermediate energy

    International Nuclear Information System (INIS)

    Kumura, M.

    1986-01-01

    Recent progress of theoretical charge transfer study in ion-molecule collisions at the intermediate energy is reviewed. Concept of close and distant collisions obtained from extensive ion-atom collision studies is identified so that it can be utilized to model two distinct collision processes. For a close collision, explicit representation of the whole collision complex is necessary to describe collision dynamics correctly, while a model potential approach for molecule is appropriate for a distant collision. It is shown that these two distinct models are indeed capable of reproducing experimental charge transfer cross sections. Some remarks for further theoretical study of ion-molecule collisions are also given. 21 refs., 8 figs

  5. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.

  6. Gating-ML: XML-based gating descriptions in flow cytometry.

    Science.gov (United States)

    Spidlen, Josef; Leif, Robert C; Moore, Wayne; Roederer, Mario; Brinkman, Ryan R

    2008-12-01

    The lack of software interoperability with respect to gating, due to the lack of a standardized mechanism for data exchange, has traditionally been a bottleneck, preventing reproducibility of flow cytometry (FCM) data analysis and the usage of multiple analytical tools. To facilitate interoperability among FCM data analysis tools, members of the International Society for the Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) have developed an XML-based mechanism to formally describe gates (Gating-ML). Gating-ML, an open specification for encoding gating, data transformations and compensation, has been adopted by the ISAC DSTF as a Candidate Recommendation. Gating-ML can facilitate the exchange of gating descriptions the same way that FCS did for the exchange of raw FCM data. Its adoption will open new collaborative opportunities as well as possibilities for advanced analyses and methods development. The ISAC DSTF is satisfied that the standard addresses the requirements for a gating exchange standard.

  7. The tissue microarray data exchange specification: A document type definition to validate and enhance XML data

    Science.gov (United States)

    Nohle, David G; Ayers, Leona W

    2005-01-01

    Background The Association for Pathology Informatics (API) Extensible Mark-up Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is a HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA) accompanied by de-identified clinical data to approved researchers. Exporting our TMA data into the proposed API specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institution's exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better specified identifiers and hierarchical relationships will make automatic use of the data possible. Our tool can be
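
    A minimal sketch of DTD-based validation of an export file is shown below, assuming the third-party lxml package; the tiny DTD and document are invented and are not the TMA DES itself.

      # Validate an XML document against a DTD (requires lxml; the DTD is invented).
      from io import StringIO
      from lxml import etree

      dtd = etree.DTD(StringIO("""
      <!ELEMENT tma_block (core+)>
      <!ELEMENT core (#PCDATA)>
      <!ATTLIST core id CDATA #REQUIRED>
      """))

      doc = etree.fromstring("<tma_block><core id='A1'>tissue</core></tma_block>")
      print(dtd.validate(doc))                    # -> True for a conforming document
      print(dtd.error_log.filter_from_errors())   # empty when validation succeeds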

  8. XSAMS: XML schema for atomic and molecular data and particle solid interaction. Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.

    2009-02-01

    Advanced developments in computer technologies offer exciting opportunities for new distribution tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the A and M Data Unit initiated within the collaborative efforts of the DCN (Data Centres Network) a new standard for atomic, molecular and particle surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). The schema is named XSAMS, which stands for 'XML Schema for Atoms, Molecules and Solids'. A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon meets biannually to discuss progress made on XSAMS, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. Such a meeting was held on 27 October 2008, and the discussions and progress made in the schema are considered within this report. (author)

  9. XSAMS: XML schema for atomic and molecular data and particle solid interactions. Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.; Braams, B.J.

    2010-01-01

    Developments in computer technology offer exciting new opportunities for the reliable and convenient exchange of data. Therefore, in 2003 the Atomic and Molecular Data Unit initiated within the collaborative efforts of the A+M Data Centres Network a new standard for exchange of atomic, molecular and particle-solid interaction (AM/PSI) data based on the eXtensible Markup Language (XML). The standard is named XSAMS, which stands for XML Schema for Atoms, Molecules, and Solids. A working group composed of staff from the IAEA, NIST, ORNL, Observatoire Paris-Meudon and other institutions meets approximately biannually to discuss progress made on XSAMS, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. Such a meeting was held 10-11 September 2009 at IAEA Headquarters, Vienna, and the discussions and results of the meeting are presented here. The principal concern of the meeting was the preparation of the first public release, version 0.1, of XSAMS. (author)

  10. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  11. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    Energy Technology Data Exchange (ETDEWEB)

    Roe, S A, E-mail: shaun.roe@cern.c [CERN, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.
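
    The XML-to-SVG step of this pipeline can be illustrated with the XSLT support built into the JDK. In the application described above the transform runs in the browser, so the Java version below is only a server-side sketch of the same idea, and the three file names are placeholders rather than anything from the ATLAS setup.

    ```java
    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XmlToSvg {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // "conditions.xsl" stands in for a stylesheet that maps condition records to SVG shapes
            Transformer transformer = factory.newTransformer(new StreamSource(new File("conditions.xsl")));
            // "conditions.xml" stands in for the XML description returned by the database query
            transformer.transform(new StreamSource(new File("conditions.xml")),
                                  new StreamResult(new File("conditions.svg")));
        }
    }
    ```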

  12. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.

  13. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    CERN Document Server

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Sem...

  14. A hierarchical data structure representation for fusing multisensor information

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. [Tennessee Univ., Tullahoma, TN (United States). Space Inst.; Pap, R.M.; Harston, C.T. [Accurate Automation Corp., Chattanooga, TN (United States)

    1989-12-31

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  15. A hierarchical data structure representation for fusing multisensor information

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. (Tennessee Univ., Tullahoma, TN (United States). Space Inst.); Pap, R.M.; Harston, C.T. (Accurate Automation Corp., Chattanooga, TN (United States))

    1989-01-01

    A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  16. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    International Nuclear Information System (INIS)

    Nguyen, N; Knutson, N; Schmidt, M; Price, M

    2016-01-01

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm² open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using our proposed as well as the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach with equivalent results.

  17. Model-Driven Engineering: Automatic Code Generation and Beyond

    Science.gov (United States)

    2015-03-01

    ...export of an Extensible Markup Language (XML) representation of the model. The XML Metadata Interchange (XMI) is an OMG standard for representing...

  18. StreetTiVo: Using a P2P XML Database System to Manage Multimedia Data in Your Living Room

    NARCIS (Netherlands)

    Zhang, Ying; de Vries, A.P.; Boncz, P.; Hiemstra, Djoerd; Ordelman, Roeland J.F.; Li, Qing; Feng, Ling; Pei, Jian; Wang, Sean X.

    StreetTiVo is a project that aims at bringing research results into the living room; in particular, a mix of current results in the areas of Peer-to-Peer XML Database Management System (P2P XDBMS), advanced multimedia analysis techniques, and advanced information retrieval techniques. The project

  19. MASCOT HTML and XML parser: an implementation of a novel object model for protein identification data.

    Science.gov (United States)

    Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L

    2006-11-01

    Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
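
    The core idea of mapping a search-result file onto objects can be illustrated with a small DOM-based sketch. The element and attribute names (protein_hit, accession, score) are invented for this example; they are not the actual MASCOT export vocabulary or the PDOM class names.

    ```java
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class ResultParser {
        // Minimal stand-in for one object of a PDOM-like model
        record ProteinHit(String accession, double score) {}

        public static List<ProteinHit> parse(File xmlFile) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(xmlFile);
            NodeList hits = doc.getElementsByTagName("protein_hit");  // hypothetical element name
            List<ProteinHit> result = new ArrayList<>();
            for (int i = 0; i < hits.getLength(); i++) {
                Element hit = (Element) hits.item(i);
                result.add(new ProteinHit(hit.getAttribute("accession"),
                                          Double.parseDouble(hit.getAttribute("score"))));
            }
            return result;  // redundancy elimination and database output would follow from here
        }
    }
    ```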

  20. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors 0270-6474/14/3414170-11$15.00/0.

  1. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    International Nuclear Information System (INIS)

    Arcidiacono, R.; Brigljevic, V.; Bruno, G.; Cano, E.; Cittolin, S.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Gulmini, M.; Gutleber, J.; Jacobs, C.; Kreuzer, P.; Lo Presti, G.; Magrans, I.; Marinelli, N.; Maron, G.; Meijers, F.; Meschi, E.; Murray, S.; Nafria, M.; Oh, A.; Orsini, L.; Pieri, M.; Pollet, L.; Racz, A.; Rosinsky, P.; Schwick, C.; Sphicas, P.; Varela, J.

    2005-01-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. This is based on a number of standalone applications for different hardware modules, and the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  2. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    Energy Technology Data Exchange (ETDEWEB)

    Arcidiacono, R. [Massachusetts Institute of Technology, Cambridge, MA (United States); Brigljevic, V. [CERN, Geneva (Switzerland); Rudjer Boskovic Institute, Zagreb (Croatia); Bruno, G. [CERN, Geneva (Switzerland); Cano, E. [CERN, Geneva (Switzerland); Cittolin, S. [CERN, Geneva (Switzerland); Erhan, S. [University of California, Los Angeles, Los Angeles, CA (United States); Gigi, D. [CERN, Geneva (Switzerland); Glege, F. [CERN, Geneva (Switzerland); Gomez-Reino, R. [CERN, Geneva (Switzerland); Gulmini, M. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); CERN, Geneva (Switzerland); Gutleber, J. [CERN, Geneva (Switzerland); Jacobs, C. [CERN, Geneva (Switzerland); Kreuzer, P. [University of Athens, Athens (Greece); Lo Presti, G. [CERN, Geneva (Switzerland); Magrans, I. [CERN, Geneva (Switzerland) and Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain)]. E-mail: ildefons.magrans@cern.ch; Marinelli, N. [Institute of Accelerating Systems and Applications, Athens (Greece); Maron, G. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); Meijers, F. [CERN, Geneva (Switzerland); Meschi, E. [CERN, Geneva (Switzerland); Murray, S. [CERN, Geneva (Switzerland); Nafria, M. [Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain); Oh, A. [CERN, Geneva (Switzerland); Orsini, L. [CERN, Geneva (Switzerland); Pieri, M. [University of California, San Diago, San Diago, CA (United States); Pollet, L. [CERN, Geneva (Switzerland); Racz, A. [CERN, Geneva (Switzerland); Rosinsky, P. [CERN, Geneva (Switzerland); Schwick, C. [CERN, Geneva (Switzerland); Sphicas, P. [University of Athens, Athens (Greece); CERN, Geneva (Switzerland); Varela, J. [LIP, Lisbon (Portugal); CERN, Geneva (Switzerland)

    2005-07-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. This is based on a number of standalone applications for different hardware modules, and the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  3. An XML transfer schema for exchange of genomic and genetic mapping data: implementation as a web service in a Taverna workflow

    Directory of Open Access Journals (Sweden)

    Law Andy

    2009-08-01

    Full Text Available Abstract Background: Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. Results: We have developed a simple generic XML schema (GenomicMappingData.xsd – GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of
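
    Checking an exported document against such a schema is straightforward with the JDK's javax.xml.validation API. The schema file name below follows the one quoted above (GenomicMappingData.xsd); the instance document name arkdb_export.xml is a placeholder.

    ```java
    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class GmdValidator {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("GenomicMappingData.xsd"));
            Validator validator = schema.newValidator();
            // throws SAXException with a diagnostic message if the export does not conform
            validator.validate(new StreamSource(new File("arkdb_export.xml")));
            System.out.println("Document conforms to the GMD schema");
        }
    }
    ```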

  4. Sustainable thorium nuclear fuel cycles: A comparison of intermediate and fast neutron spectrum systems

    International Nuclear Information System (INIS)

    Brown, N.R.; Powers, J.J.; Feng, B.; Heidet, F.; Stauff, N.E.; Zhang, G.; Todosow, M.; Worrall, A.; Gehin, J.C.; Kim, T.K.; Taiwo, T.A.

    2015-01-01

    Highlights: • Comparison of intermediate and fast spectrum thorium-fueled reactors. • Variety of reactor technology options enables self-sustaining thorium fuel cycles. • Fuel cycle analyses indicate similar performance for fast and intermediate systems. • Reproduction factor plays a significant role in breeding and burn-up performance. - Abstract: This paper presents analyses of possible reactor representations of a nuclear fuel cycle with continuous recycling of thorium and produced uranium (mostly U-233) with thorium-only feed. The analysis was performed in the context of a U.S. Department of Energy effort to develop a compendium of informative nuclear fuel cycle performance data. The objective of this paper is to determine whether intermediate spectrum systems, having a majority of fission events occurring with incident neutron energies between 1 eV and 10⁵ eV, perform as well as fast spectrum systems in this fuel cycle. The intermediate spectrum options analyzed include tight lattice heavy or light water-cooled reactors, continuously refueled molten salt reactors, and a sodium-cooled reactor with hydride fuel. All options were modeled in reactor physics codes to calculate their lattice physics, spectrum characteristics, and fuel compositions over time. Based on these results, detailed metrics were calculated to compare the fuel cycle performance. These metrics include waste management and resource utilization, and are binned to accommodate uncertainties. The performance of the intermediate systems for this self-sustaining thorium fuel cycle was similar to a representative fast spectrum system. However, the number of fission neutrons emitted per neutron absorbed limits performance in intermediate spectrum systems.

  5. Sustainable thorium nuclear fuel cycles: A comparison of intermediate and fast neutron spectrum systems

    Energy Technology Data Exchange (ETDEWEB)

    Brown, N.R., E-mail: nbrown@bnl.gov [Brookhaven National Laboratory, Upton, NY (United States); Powers, J.J. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Feng, B.; Heidet, F.; Stauff, N.E.; Zhang, G. [Argonne National Laboratory, Argonne, IL (United States); Todosow, M. [Brookhaven National Laboratory, Upton, NY (United States); Worrall, A.; Gehin, J.C. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Kim, T.K.; Taiwo, T.A. [Argonne National Laboratory, Argonne, IL (United States)

    2015-08-15

    Highlights: • Comparison of intermediate and fast spectrum thorium-fueled reactors. • Variety of reactor technology options enables self-sustaining thorium fuel cycles. • Fuel cycle analyses indicate similar performance for fast and intermediate systems. • Reproduction factor plays a significant role in breeding and burn-up performance. - Abstract: This paper presents analyses of possible reactor representations of a nuclear fuel cycle with continuous recycling of thorium and produced uranium (mostly U-233) with thorium-only feed. The analysis was performed in the context of a U.S. Department of Energy effort to develop a compendium of informative nuclear fuel cycle performance data. The objective of this paper is to determine whether intermediate spectrum systems, having a majority of fission events occurring with incident neutron energies between 1 eV and 10{sup 5} eV, perform as well as fast spectrum systems in this fuel cycle. The intermediate spectrum options analyzed include tight lattice heavy or light water-cooled reactors, continuously refueled molten salt reactors, and a sodium-cooled reactor with hydride fuel. All options were modeled in reactor physics codes to calculate their lattice physics, spectrum characteristics, and fuel compositions over time. Based on these results, detailed metrics were calculated to compare the fuel cycle performance. These metrics include waste management and resource utilization, and are binned to accommodate uncertainties. The performance of the intermediate systems for this self-sustaining thorium fuel cycle was similar to a representative fast spectrum system. However, the number of fission neutrons emitted per neutron absorbed limits performance in intermediate spectrum systems.

  6. Value of XML in the implementation of clinical practice guidelines--the issue of content retrieval and presentation.

    Science.gov (United States)

    Hoelzer, S; Schweiger, R K; Boettcher, H A; Tafazzoli, A G; Dudeck, J

    2001-01-01

    that preserves the original cohesiveness. The lack of structure limits the automatic identification and extraction of the information contained in these resources. For this reason, we have chosen a document-based approach using eXtensible Markup Language (XML) with its schema definition and related technologies. XML empowers the applications for in-context searching. In addition it allows the same content to be represented in different ways. Our XML reference clinical data model for guidelines has been realized with the XML schema definition. The schema is used for structuring new text-based guidelines and updating existing documents. It is also used to establish search strategies on the document base. We hypothesize that enabling the physicians to query the available CPGs easily, and to get access to selected and specific information at the point of care will foster increased use. Based on current evidence we are confident that it will have substantial impact on the care provided, and will improve health outcomes.
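
    The kind of in-context searching described here can be sketched with a plain XPath query over a schema-structured guideline document. The file name, the recommendation element and the search term below are all invented for this illustration and are not part of the authors' reference data model.

    ```java
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class GuidelineSearch {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                                                 .newDocumentBuilder()
                                                 .parse(new File("guideline.xml"));
            XPath xpath = XPathFactory.newInstance().newXPath();
            // return only the recommendation elements whose text mentions the query term
            NodeList hits = (NodeList) xpath.evaluate(
                    "//recommendation[contains(., 'anticoagulation')]", doc, XPathConstants.NODESET);
            for (int i = 0; i < hits.getLength(); i++) {
                System.out.println(hits.item(i).getTextContent().trim());
            }
        }
    }
    ```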

  7. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  8. An enhanced security solution for electronic medical records based on AES hybrid technique with SOAP/XML and SHA-1.

    Science.gov (United States)

    Kiah, M L Mat; Nabi, Mohamed S; Zaidan, B B; Zaidan, A A

    2013-10-01

    This study aims to provide security solutions for implementing electronic medical records (EMRs). E-Health organizations could utilize the proposed method and implement recommended solutions in medical/health systems. The majority of the required security features of EMRs were noted. The methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate the safeguarding of the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed to be implemented on EMR systems. This method will enhance the robustness, security, and integration of EMR systems. The hybrid of the Simple Object Access Protocol (SOAP)/eXtensible Markup Language (XML) with the Advanced Encryption Standard (AES) and Secure Hash Algorithm version 1 (SHA-1) has achieved the security requirements of an EMR system with the capability of integrating with other systems through the design of XML messages.
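
    The two primitives named in the abstract can be exercised with the JDK's standard crypto APIs. The sketch below is not the authors' SOAP/XML message design; it only shows a SHA-1 digest and AES encryption applied to an XML payload, with key handling and cipher mode deliberately simplified.

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class EmrMessageSketch {
        public static void main(String[] args) throws Exception {
            String xmlPayload = "<patientRecord id=\"demo\"><note>example only</note></patientRecord>";

            // SHA-1 digest carried alongside the message so the receiver can check integrity
            byte[] digest = MessageDigest.getInstance("SHA-1")
                                         .digest(xmlPayload.getBytes(StandardCharsets.UTF_8));

            // AES encryption of the payload; a real deployment needs proper key exchange and an
            // authenticated cipher mode rather than this default, simplified setup
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(xmlPayload.getBytes(StandardCharsets.UTF_8));

            System.out.println("digest:     " + Base64.getEncoder().encodeToString(digest));
            System.out.println("ciphertext: " + Base64.getEncoder().encodeToString(ciphertext));
        }
    }
    ```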

  9. Tactical Web Services: Using XML and Java Web Services to Conduct Real-Time Net-Centric Sonar Visualization

    Science.gov (United States)

    2004-09-01


  10. Using XML and Java for Astronomical Instrumentation Control

    Science.gov (United States)

    Ames, Troy; Koons, Lisa; Sall, Ken; Warsaw, Craig

    2000-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human readable and machine understandable way to describe structured data. A key aspect of the object-oriented architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, and communication mechanisms. Although the current effort is targeted for the High-resolution Airborne Wideband Camera, a first-light instrument of the Stratospheric Observatory for Infrared Astronomy, the framework is designed to be generic and extensible so that it can be applied to any instrument.

  11. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  12. MeMo: a hybrid SQL/XML approach to metabolomic data management for functional genomics

    Directory of Open Access Journals (Sweden)

    Hardy Nigel

    2006-06-01

    Full Text Available Abstract Background: Genome sequencing projects have shown how limited our knowledge of gene function is; S. cerevisiae, for example, has 5,000–6,000 genes, of which nearly 1,000 have an uncertain function. Their gross influence on the behaviour of the cell can be observed using large-scale metabolomic studies. The metabolomic data produced need to be structured and annotated in a machine-usable form to facilitate the exploration of the hidden links between the genes and their functions. Description: MeMo is a formal model for representing metabolomic data and the associated metadata. Two predominant platforms (SQL and XML) are used to encode the model. MeMo has been implemented as a relational database using a hybrid approach combining the advantages of the two technologies. It represents a practical solution for handling the sheer volume and complexity of the metabolomic data effectively and efficiently. The MeMo model and the associated software are available at http://dbkgroup.org/memo/. Conclusion: The maturity of relational database technology is used to support efficient data processing. The scalability and self-descriptiveness of XML are used to simplify the relational schema and facilitate the extensibility of the model necessitated by the creation of new experimental techniques. Special consideration is given to data integration issues as part of the systems biology agenda. MeMo has been physically integrated and cross-linked to related metabolomic and genomic databases. Semantic integration with other relevant databases has been supported through ontological annotation. Compatibility with other data formats is supported by automatic conversion.
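
    The hybrid idea of keeping XML documents inside relational rows can be sketched with plain JDBC. The connection URL, table and column names below are invented for illustration and do not reproduce MeMo's actual schema.

    ```java
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class HybridStoreSketch {
        public static void main(String[] args) throws Exception {
            // XML metadata document kept whole, next to ordinary relational columns
            String metadataXml = Files.readString(Path.of("experiment_metadata.xml"));
            try (Connection con = DriverManager.getConnection(
                         "jdbc:postgresql://localhost/memo_demo", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO experiment (name, metadata_xml) VALUES (?, ?)")) {
                ps.setString(1, "yeast-metabolome-001");
                ps.setString(2, metadataXml);  // stored as an XML/text column for later querying
                ps.executeUpdate();
            }
        }
    }
    ```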

  13. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for detecting similarity. Most of them focus only on the lexical token sequence extracted from source code. From our point of view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It only considers semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. For a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we also incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e. Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.
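
    The idea of aligning low-level instruction sequences can be made concrete with a plain Smith-Waterman-style local alignment over tokens. The adaptive scoring of the cited algorithm is not reproduced here, and the two token streams are invented Common Intermediate Language-like fragments.

    ```java
    public class TokenLocalAlignment {
        // Plain local alignment over two instruction-token sequences; scores are illustrative only.
        static int align(String[] a, String[] b) {
            final int match = 2, mismatch = -1, gap = -1;
            int[][] h = new int[a.length + 1][b.length + 1];
            int best = 0;
            for (int i = 1; i <= a.length; i++) {
                for (int j = 1; j <= b.length; j++) {
                    int diag = h[i - 1][j - 1] + (a[i - 1].equals(b[j - 1]) ? match : mismatch);
                    int up = h[i - 1][j] + gap;
                    int left = h[i][j - 1] + gap;
                    h[i][j] = Math.max(0, Math.max(diag, Math.max(up, left)));
                    best = Math.max(best, h[i][j]);
                }
            }
            return best;  // highest-scoring locally similar region between the two fragments
        }

        public static void main(String[] args) {
            String[] p1 = {"ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"};
            String[] p2 = {"nop", "ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"};
            System.out.println("local alignment score: " + align(p1, p2));
        }
    }
    ```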

  14. E-Learning – Using XML technologies to meet the special characteristics of higher education

    Directory of Open Access Journals (Sweden)

    Igor Kanovsky

    2004-02-01

    Full Text Available In this paper we claim that the current approach to learning objects and metadata standards is counterproductive for the integration of e-learning in higher education. We explain why higher education is different with regard to e-learning and we suggest an approach that avoids the use of global standards and favors an evolving set of metadata tags for an evolving community of practice. We demonstrate how XML technologies and some minimal technical help for the participating teachers can provide the required foundation for a productive process of integrating e-learning in higher education.

  15. Applying GRID Technologies to XML Based OLAP Cube Construction

    CERN Document Server

    Niemi, Tapio Petteri; Nummenmaa, J; Thanisch, P

    2002-01-01

    On-Line Analytical Processing (OLAP) is a powerful method for analysing large data warehouse data. Typically, the data for an OLAP database is collected from a set of data repositories such as operational databases. This data set is often huge, and it may not be known in advance what data is required and when to perform the desired data analysis tasks. Sometimes it may happen that some parts of the data are only needed occasionally. Therefore, storing all data in the OLAP database and keeping this database constantly up-to-date is not only a highly demanding task but may also be overkill in practice. This suggests that in some applications it would be more feasible to form the OLAP cubes only when they are actually needed. However, the OLAP cube construction can be a slow process. Thus, we present a system that applies Grid technologies to distribute the computation. As the data sources may well be heterogeneous, we propose an XML language for data collection. The user's definition for a new OLAP cube...

  16. XML Schema for Atoms, Molecules and Solids (XSAMS). Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Braams, B.J.

    2011-12-01

    A Consultants' Meeting on 'XML Schema for Atoms, Molecules and Solids (XSAMS)' was held at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD, United States of America, 3-5 October 2011. Objectives of the meeting were to review and discuss developments of the Schema made during 2011 in connection with implementations on databases associated with the Virtual Atomic and Molecular Data Centre (VAMDC) and to agree on the adoption of an international standard XSAMS version 1.0. The proceedings of the meeting are summarized here. (author)

  17. Intermediate treatments

    Science.gov (United States)

    John R. Jones; Wayne D. Shepperd

    1985-01-01

    Intermediate treatments are those applied after a new stand is successfully established and before the final harvest. These include not only intermediate cuttings - primarily thinning - but also fertilization, irrigation, and protection of the stand from damaging agents.

  18. Study of Tachyon Warm Intermediate and Logamediate Inflationary Universe from Loop Quantum Cosmological Perspective

    International Nuclear Information System (INIS)

    Mandal, Jyotirmay Das; Debnath, Ujjal

    2016-01-01

    We have studied the tachyon intermediate and logamediate warm inflation in loop quantum cosmological background by taking the dissipative co-efficient Γ = Γ₀ (where Γ₀ is a constant) in “intermediate” inflation and Γ = V(ϕ) (where V(ϕ) is the potential of the tachyonic field) in “logamediate” inflation. We have assumed the slow-roll condition to construct the scalar field ϕ, potential V, N-folds, etc. Various slow-roll parameters have also been obtained. We have analyzed the stability of this model through graphical representations. (paper)

  19. On Logic and Standards for Structuring Documents

    Science.gov (United States)

    Eyers, David M.; Jones, Andrew J. I.; Kimbrough, Steven O.

    The advent of XML has been widely seized upon as an opportunity to develop document representation standards that lend themselves to automated processing. This is a welcome development and much good has come of it. That said, present standardization efforts may be criticized on a number of counts. We explore two issues associated with document XML standardization efforts. We label them (i) the dynamic point and (ii) the logical point. Our dynamic point is that in many cases experience has shown that the search for a final, or even reasonably permanent, document representation standard is futile. The case is especially strong for electronic data interchange (EDI). Our logical point is that formalization into symbolic logic is materially helpful for understanding and designing dynamic document standards.

  20. Chemical markup, XML, and the world wide web. 6. CMLReact, an XML vocabulary for chemical reactions.

    Science.gov (United States)

    Holliday, Gemma L; Murray-Rust, Peter; Rzepa, Henry S

    2006-01-01

    A set of components (CMLReact) for managing chemical and biochemical reactions has been added to CML. These can be combined to support most of the strategies for the formal representation of reactions. The elements, attributes, and types are formally defined as XMLSchema components, and their semantics are developed. New syntax and semantics in CML are reported and illustrated with 10 examples.

  1. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    Science.gov (United States)

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. An XML based middleware for ECG format conversion.

    Science.gov (United States)

    Li, Xuchen; Vojisavljevic, Vuk; Fang, Qiang

    2009-01-01

    With the rapid development of information and communication technologies, various e-health solutions have been proposed. Digitized medical images and one-dimensional medical signals are two major forms of medical information that are stored and manipulated within an electronic medical environment. Although a variety of industrial and international standards such as DICOM and HL7 have been proposed, many proprietary formats are still pervasively used by Hospital Information System (HIS) and Picture Archiving and Communication System (PACS) vendors. These proprietary formats are a major hurdle to forming a nationwide or even worldwide e-health network, so there is an imperative need to solve the medical data integration problem. Moreover, many small clinics, hospitals in developing countries and some regional hospitals in developed countries with limited budgets have been kept from adopting the latest medical information technologies by their high costs. In this paper, we propose an XML-based middleware which acts as a translation engine to seamlessly integrate clinical ECG data from a variety of proprietary data formats. Furthermore, this ECG translation engine is designed so that it can be integrated into an existing PACS to provide a low-cost medical information integration and storage solution.

  3. LUNARINFO: A Data Archiving and Retrieving System for the Circumlunar Explorer Based on XML/Web Services

    Institute of Scientific and Technical Information of China (English)

    ZUO Wei; LI Chunlai; OUYANG Ziyuan; LIU Jianjun; XU Tao

    2004-01-01

    It is essential to build a modern information management system to store and manage data from our circumlunar explorer in order to realize its scientific objectives. It is difficult for an information system based on traditional distributed technology to exchange information and work together among heterogeneous systems and so meet the new requirements of Internet development. XML and Web Services, because of their open standards and self-contained properties, have changed the way information is organized and data are managed, and now provide a good solution for building an open, extendable and compatible information management system that facilitates interchange and transfer of data among heterogeneous systems. On the basis of a three-tiered browser/server architecture, with the Oracle 9i database as the information storage platform, we have designed and implemented LUNARINFO, a data archiving and retrieval system for the circumlunar explorer. We have also successfully integrated LUNARINFO with the cosmic dust database system. LUNARINFO consists of five function modules for data management, information publishing, system management, data retrieval and interface integration. Based on XML and Web Services, it is not only an information database system for archiving, long-term storage, retrieval and publication of lunar reference data related to the circumlunar explorer, but also provides data web services which can be easily developed by various expert groups and connected to the common information system to realize data resource integration.

  4. Automated Individual Prescription of Exercise with an XML-based Expert System.

    Science.gov (United States)

    Jang, S; Park, S R; Jang, Y; Park, J; Yoon, Y; Park, S

    2005-01-01

    Continuously motivating people to exercise regularly is more important than addressing barriers such as lack of time, the cost of equipment or gym membership, lack of nearby facilities, and poor weather or night-time lighting. Our proposed system presents practicable methods of motivation through a web-based exercise prescription service. Users are instructed to exercise according to their physical ability by means of an automated individual exercise prescription that is checked and approved by a personal trainer or exercise specialist after the user has been tested with HIMS, a fitness assessment system. Furthermore, utilizing BIOFIT exercise prescriptions scheduled by an expert system can help users exercise systematically. Automated individual prescriptions are built as XML-based documents because the data need flexible, extensible and convertible structures to process diverse exercise templates. The web-based exercise prescription service keeps users interested in exercise even though they live in many different environments.

  5. Factorizations and physical representations

    International Nuclear Information System (INIS)

    Revzen, M; Khanna, F C; Mann, A; Zak, J

    2006-01-01

    A Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the decomposition of M into prime numbers. Representations that exhibit the factorization of M into two relatively prime numbers: the kq representation (Zak J 1970 Phys. Today 23 51), and related representations termed q1q2 representations (together with their conjugates) are analysed, as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M.

  6. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.

  7. Hierarchical representation of shapes in visual cortex - from localized features to figural shape segregation

    Directory of Open Access Journals (Sweden)

    Stephan eTschechne

    2014-08-01

    Full Text Available Visual structures in the environment are effortlessly segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. At this stage, highly articulated changes in shape boundary as well as very subtle curvature changes contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes a hierarchical distributed representation of shape features to encode boundary features over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback from representations generated at higher stages. In so doing, global configurational as well as local information is available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. This combines separate findings about the generation of cortical shape representation using hierarchical representations with figure-ground segregation mechanisms. Our model is probed with a selection of artificial and real world images to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  8. Distinguishing Representations as Origin and Representations as Input: Roles for Individual Cells

    Directory of Open Access Journals (Sweden)

    Jonathan C.W. Edwards

    2016-09-01

    Full Text Available It is widely perceived that there is a problem in giving a naturalistic account of mental representation that deals adequately with meaning, interpretation or significance (semantic content. It is suggested here that this problem may arise partly from the conflation of two vernacular senses of representation: representation-as-origin and representation-as-input. The flash of a neon sign may in one sense represent a popular drink, but to function as representation it must provide an input to a ‘consumer’ in the street. The arguments presented draw on two principles – the neuron doctrine and the need for a venue for ‘presentation’ or ‘reception’ of a representation at a specified site, consistent with the locality principle. It is also argued that domains of representation cannot be defined by signal traffic, since they can be expected to include ‘null’ elements based on non-firing cells. In this analysis, mental representations-as-origin are distributed patterns of cell firing. Each firing cell is given semantic value in its own right - some form of atomic propositional significance – since different axonal branches may contribute to integration with different populations of signals at different downstream sites. Representations-as-input are patterns of local co-arrival of signals in the form of synaptic potentials in dendrites. Meaning then draws on the relationships between active and null inputs, forming ‘scenarios’ comprising a molecular combination of ‘premises’ from which a new output with atomic propositional significance is generated. In both types of representation, meaning, interpretation or significance pivots on events in an individual cell. (This analysis only applies to ‘occurrent’ representations based on current neural activity. The concept of representations-as-input emphasises the need for a ‘consumer’ of a representation and the dependence of meaning on the co-relationships involved in an

  9. XML and Graphs for Modeling, Integration and Interoperability:a CMS Perspective

    CERN Document Server

    van Lingen, Frank

    2004-01-01

    This thesis reports on a designer's Ph.D. project called “XML and Graphs for Modeling, Integration and Interoperability: a CMS perspective”. The project has been performed at CERN, the European laboratory for particle physics, in collaboration with the Eindhoven University of Technology and the University of the West of England in Bristol. CMS (Compact Muon Solenoid) is a next-generation high energy physics experiment at CERN, which will start running in 2007. The complexity of such a detector used in the experiment and the autonomous groups that are part of the CMS experiment, result in disparate data sources (different in format, type and structure). Users need to access and exchange data located in multiple heterogeneous sources in a domain-specific manner and may want to access a simple unit of information without having to understand details of the underlying schema. Users want to access the same information from several different heterogeneous sources. It is neither desirable nor fea...

  10. Intermediate neutron spectrum problems and the intermediate neutron spectrum experiment

    International Nuclear Information System (INIS)

    Jaegers, P.J.; Sanchez, R.G.

    1996-01-01

    Criticality benchmark data for intermediate energy spectrum systems do not exist. These systems are dominated by scattering and fission events induced by neutrons with energies between 1 eV and 1 MeV. Nuclear data uncertainties that cannot be resolved without benchmark critical experiments have been reported for such systems. Intermediate energy spectrum systems have been proposed for the geological disposition of surplus fissile materials. Without proper benchmarking of the nuclear data in the intermediate energy spectrum, adequate criticality safety margins cannot be guaranteed. The Zeus critical experiment now under construction will provide this necessary benchmark data.

  11. Multi-representation based on scientific investigation for enhancing students’ representation skills

    Science.gov (United States)

    Siswanto, J.; Susantini, E.; Jatmiko, B.

    2018-03-01

    This research aims to implement physics learning with multi-representation based on scientific investigation to enhance students’ representation skills, especially on the subject of magnetic fields. The research design is a one-group pretest-posttest design. The research was conducted in the department of mathematics education, Universitas PGRI Semarang, with a sample of class 2F students taking basic physics courses. The data were obtained through a representation skills test and documentation of the multi-representation worksheets. The results show a gain value of .64, which indicates a medium improvement. A t-test (α = .05) gives p = .001; this learning significantly improves students’ representation skills.
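
    The gain value reported above is conventionally a normalized gain; assuming the standard Hake formulation (the abstract does not spell out the formula), it is computed as

    ```latex
    \langle g \rangle \;=\; \frac{\%\langle \mathrm{post} \rangle - \%\langle \mathrm{pre} \rangle}{100 - \%\langle \mathrm{pre} \rangle}
    ```

    A value of .64 falls in the 0.3–0.7 band usually labelled a medium gain; for instance, a pretest mean of 40% and a posttest mean of about 78% give ⟨g⟩ ≈ 0.64.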

  12. An XML-based Schema-less Approach to Managing Diagnostic Data in Heterogeneous Formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O. [Japan Atomic Energy Agency, Ibaraki (Japan)

    2009-07-01

    Managing diagnostic data in heterogeneous formats is always a nuisance, especially when a new diagnostic technique requires a new data structure that does not fit the existing data format. Ideally, it would be best to have an all-purpose schema that can specify any data structure, but devising such a schema is a difficult task and the resulting data management system tends to be large and complicated. As a complementary approach, we can think of a system that has no specific schema but requires each data set to describe itself without assuming any prior information. In this paper, a very primitive implementation of such a system based on Extensible Markup Language (XML) is examined. The actual implementation is no more than the addition of a tiny XML meta-data file that describes the detailed format of the associated diagnostic data file. There are many ways to write and read such meta-data files. For example, if the data are in a standard format that is foreign to the existing system, the meta-data simply specify the name of the format and which interface to use for reading the data. If the data are in a non-standard, arbitrary format, the meta-data record what is written and how, at every occurrence of data output. As a last resort, if the format of the data is too complicated, code to read the data can be stored in the meta-data file. Of course, this schema-less approach has some drawbacks, two of which are the doubling of the number of files to be managed and the low performance of data handling, though the former can be a merit when it is necessary to update the meta-data while leaving the body data intact. The important point is that the information necessary to read the data is decoupled from the data itself. The merits and demerits of this approach are discussed. This document is composed of an abstract followed by the presentation slides. (author)
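
    A minimal sketch of how such a meta-data file might look and be consumed, with an entirely hypothetical layout (element names, field types and the data file name are illustrative, not the author's actual format):

    ```python
    import struct
    import xml.etree.ElementTree as ET

    # Hypothetical meta-data file accompanying a raw diagnostic file "shot1234_ece.dat".
    META = """<diagnosticData file="shot1234_ece.dat" format="binary">
      <field name="time" type="float64" count="1000"/>
      <field name="Te"   type="float64" count="1000"/>
    </diagnosticData>"""

    TYPES = {"float64": ("d", 8), "int32": ("i", 4)}

    def read_with_metadata(meta_xml, raw_bytes):
        """Decode a raw data file using only the layout its meta-data declares."""
        meta = ET.fromstring(meta_xml)
        data, offset = {}, 0
        for field in meta.iter("field"):
            fmt, size = TYPES[field.get("type")]
            count = int(field.get("count"))
            data[field.get("name")] = struct.unpack_from(f"<{count}{fmt}", raw_bytes, offset)
            offset += count * size
        return data

    # Usage: read_with_metadata(META, open("shot1234_ece.dat", "rb").read())
    ```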

  13. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    Energy Technology Data Exchange (ETDEWEB)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu [Department of Mathematics, University of Southern California, Los Angeles, CA 90089 (United States); Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089 (United States); Ghanem, Roger G., E-mail: ghanem@usc.edu [Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089 (United States)

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
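
    In symbols, and only as a schematic rendering of the general basis-adaptation idea rather than a transcription of the paper, the expansion and the rotation read

    ```latex
    u(\boldsymbol{\xi}) = \sum_{\boldsymbol{\alpha}} u_{\boldsymbol{\alpha}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}),
    \qquad
    \boldsymbol{\eta} = \mathbf{A}\boldsymbol{\xi},\ \ \mathbf{A}\mathbf{A}^{\mathsf{T}} = \mathbf{I},
    \qquad
    u(\boldsymbol{\xi}) \approx \sum_{\beta} \tilde{u}_{\beta}\, \psi_{\beta}(\eta_1, \ldots, \eta_d),\quad d \ll \dim \boldsymbol{\xi}
    ```

    Here the ψ are the Hermite (Homogeneous Chaos) basis functions and the rotation A is chosen so that only a few adapted Gaussian coordinates η carry most of the dependence of the quantity of interest.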

  14. Representation in Memory.

    Science.gov (United States)

    Rumelhart, David E.; Norman, Donald A.

    This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…

  15. Attention and Representational Momentum

    OpenAIRE

    Hayes, Amy; Freyd, Jennifer J

    1995-01-01

    Representational momentum, the tendency for memory to be distorted in the direction of an implied transformation, suggests that dynamics are an intrinsic part of perceptual representations. We examined the effect of attention on dynamic representation by testing for representational momentum under conditions of distraction. Forward memory shifts increase when attention is divided. Attention may be involved in halting but not in maintaining dynamic representations.

  16. Conformal symmetry in two-dimensional space: recursion representation of conformal block

    International Nuclear Information System (INIS)

    Zamolodchikov, A.B.

    1988-01-01

    The four-point conformal block plays an important part in the analysis of the conformally invariant operator algebra in two-dimensional space. The behavior of the conformal block is calculated in the present paper in the limit in which the dimension Δ of the intermediate operator tends to infinity. This makes it possible to construct a recursion relation for this function that connects the conformal block at arbitrary Δ to the blocks corresponding to the dimensions of the null vectors in the degenerate representations of the Virasoro algebra. The relation is convenient for calculating the expansion of the conformal block in powers of the uniformizing parameter q = exp(iπτ).
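
    Schematically, and suppressing the Δ-independent prefactors, the commonly quoted form of this elliptic recursion is

    ```latex
    \mathcal{F}(\Delta \mid x) \;\propto\; (16q)^{\Delta}\, H(\Delta, q),
    \qquad q = e^{i\pi\tau},
    \qquad
    H(\Delta, q) = 1 + \sum_{m,n \ge 1} \frac{(16q)^{mn}\, R_{m,n}}{\Delta - \Delta_{m,n}}\, H(\Delta_{m,n} + mn,\, q)
    ```

    where Δ_{m,n} are the degenerate dimensions, R_{m,n} the corresponding residues and τ the modular parameter of the uniformizing map; the exact prefactors depend on the central charge and the external dimensions and are omitted here.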

  17. LGBT Representations on Facebook : Representations of the Self and the Content

    OpenAIRE

    Chu, Yawen

    2017-01-01

    The topic of LGBT rights has been increasingly discussed and debated over recent years, and more and more scholars have taken an interest in LGBT representations in media. However, few studies have examined LGBT representations in social media. This paper explores LGBT representations on Facebook by analysing posts on an open page and in a private group, covering both representations of the self as the identity of sexual minorities, the content that is displayed on Facebook and the simila...

  18. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries bordering all European seas. A single portal enables distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, area of interest), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship) and acquisition information (e.g. utilized instruments, number of samples taken). In the context of the second phase of SeaDataNet (the SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years) a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of an ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now in its stable version and is being implemented and adopted by the SeaDataNet community tools and software. The effort then continued to produce an ISO-based metadata model and its XML encoding for CSR as well. The metadata elements included in the CSR profile belong to different models: - ISO 19115: e.g. cruise identification information, including
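
    For orientation, the sketch below builds an ISO 19139-style fragment in the standard gmd/gco namespaces with Python; the cruise values and the particular choice of elements are illustrative and do not reproduce the actual CSR profile.

    ```python
    import xml.etree.ElementTree as ET

    GMD = "http://www.isotc211.org/2005/gmd"
    GCO = "http://www.isotc211.org/2005/gco"
    ET.register_namespace("gmd", GMD)
    ET.register_namespace("gco", GCO)

    def char_string(parent, tag, text):
        """Wrap a value in the gco:CharacterString element ISO 19139 expects."""
        elem = ET.SubElement(parent, f"{{{GMD}}}{tag}")
        ET.SubElement(elem, f"{{{GCO}}}CharacterString").text = text
        return elem

    # Hypothetical cruise identification block.
    md = ET.Element(f"{{{GMD}}}MD_Metadata")
    ident = ET.SubElement(ET.SubElement(md, f"{{{GMD}}}identificationInfo"),
                          f"{{{GMD}}}MD_DataIdentification")
    citation = ET.SubElement(ET.SubElement(ident, f"{{{GMD}}}citation"),
                             f"{{{GMD}}}CI_Citation")
    char_string(citation, "title", "Example cruise, RV Example Ship, 2012")
    char_string(ident, "abstract", "Cruise summary report (illustrative values only).")

    print(ET.tostring(md, encoding="unicode"))
    ```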

  19. Connecting the Real to the Representational: Historical Demographic Data in the Town of Pullman, 1880-1940

    Directory of Open Access Journals (Sweden)

    Andrew H. Bullen

    2007-12-01

    Full Text Available The Pullman House History Project is a part of the Pullman State Historic Site’s virtual museum and web site (http://www.pullman-museum.org/), which links together census, city directory, and telephone directory information to describe the people who lived in the town of Pullman, Illinois between 1881 and 1940. This demographic data is linked through a database/XML record system to online maps and Perl programs that allow the data to be represented in various useful combinations. This article describes the structure of the database and XML records, as well as the methods and code used to link the parts together and display the data.
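
    The linkage described here - person records keyed to houses so they can be recombined for display - might be sketched as follows; the record layout and the address are invented, and the original system used Perl rather than the Python shown here.

    ```python
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    # Hypothetical resident records keyed by house address.
    RECORDS = """<residents>
      <person name="A. Lindqvist" year="1900" source="census"    address="11304 St. Lawrence"/>
      <person name="A. Lindqvist" year="1905" source="directory" address="11304 St. Lawrence"/>
    </residents>"""

    def by_house(xml_text):
        """Group person records by address so a map page can list a house's history."""
        houses = defaultdict(list)
        for person in ET.fromstring(xml_text).iter("person"):
            houses[person.get("address")].append(
                (int(person.get("year")), person.get("source"), person.get("name")))
        return {addr: sorted(entries) for addr, entries in houses.items()}

    print(by_house(RECORDS))
    ```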

  20. Representation Elements of Spatial Thinking

    Science.gov (United States)

    Fiantika, F. R.

    2017-04-01

    This paper aims to add a reference point for revealing spatial thinking. There are several definitions of spatial thinking, but it is not easy to define; we can start by discussing the concept from its basis, the forming of representations. Initially, the five senses capture natural phenomena and forward them to memory for processing. Abstraction plays a role in processing this information into a concept. There are two types of representation, namely internal representation and external representation. The internal representation, also known as mental representation, resides in the human mind. The external representation may include images, auditory and kinesthetic forms, which can be used to describe, explain and communicate the structure, operation and function of an object as well as its relationships. There are two main elements: representation properties and object relationships. These elements play a role in forming a representation.