WorldWideScience

Sample records for selective key-oriented xml

  1. Vague element selection and query rewriting for XML retrieval

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; de Jong, Franciska M.G.; Kraaij, W.

    In this paper we present the extension of our prototype three-level database system (TIJAH) developed for structured information retrieval. The extension is aimed at modeling vague search on XML elements. All three levels (conceptual, logical, and physical) of the TIJAH system are enhanced to

  2. XML to XML through XML

    NARCIS (Netherlands)

    Lemmens, W.J.M.; Houben, G.J.P.M.

    2001-01-01

    XML documents are used to exchange data. Data exchange implies the transformation of the original data to a different structure. Often such transformations need to be adapted to some specific situation, like the rendering to non-standard platforms for display or the support of special user

  3. XML Files

    Science.gov (United States)

    MedlinePlus XML Files (https://medlineplus.gov/xml.html): MedlinePlus produces XML data sets that you are welcome to download ...

  4. Invisible XML

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2013-01-01

    What if you could see everything as XML? XML has many strengths for data exchange, strengths both inherent in the nature of XML markup and strengths that derive from the ubiquity of tools that can process XML. For authoring, however, other forms are preferred: no one writes CSS or

  5. XML Transformations

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2012-04-01

    XSLT style sheets are designed to transform XML documents into something else. The two most popular parsers of the moment are the Document Object Model (DOM) and the Simple API for XML (SAX). DOM is an official recommendation of the W3C (available at http://www.w3.org/TR/REC-DOM-Level-1), while SAX is a de facto standard. A good parser should be fast, space efficient, rich in functionality, and easy to use.
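
    The DOM/SAX contrast in the record above can be sketched with Python's standard library (an illustrative analogue; the record discusses the W3C DOM recommendation and SAX generally, not these particular bindings). DOM materializes the whole tree in memory, while SAX streams parse events:

```python
import xml.dom.minidom
import xml.sax

DOC = "<books><book>XML</book><book>XSLT</book></books>"

# DOM: the whole document is loaded into an in-memory tree, then navigated.
dom = xml.dom.minidom.parseString(DOC)
titles_dom = [n.firstChild.data for n in dom.getElementsByTagName("book")]

# SAX: the parser pushes events to a handler; memory use stays flat
# regardless of document size.
class BookHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.titles, self._in_book = [], False
    def startElement(self, name, attrs):
        if name == "book":
            self._in_book = True
            self.titles.append("")
    def endElement(self, name):
        if name == "book":
            self._in_book = False
    def characters(self, content):
        # text may arrive in chunks, so accumulate
        if self._in_book:
            self.titles[-1] += content

handler = BookHandler()
xml.sax.parseString(DOC.encode(), handler)

print(titles_dom)      # ['XML', 'XSLT']
print(handler.titles)  # ['XML', 'XSLT']
```

    Both styles recover the same data; the trade-off the record names (speed, space efficiency, ease of use) is visible in how much state each approach forces the caller to manage.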

  6. Implementasi XML Encryption (XML Enc) Menggunakan Java

    OpenAIRE

    Tenia Wahyuningrum

    2012-01-01

    As the use of XML in various Internet services grows, with information largely disseminated over public network infrastructure, the need to secure the data contained in an XML document has become a concern. One approach is XML Enc technology. This paper discusses how to use XML Enc with the Java programming language, in particular to encrypt XML documents (enc...
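
    For orientation, this is the shape of the envelope XML Encryption produces when an element is encrypted. The sketch below only assembles the structure with Python's ElementTree; the CipherValue content is a placeholder, since a real implementation (such as the Java libraries the paper targets) would fill it with the actual encrypted, base64-encoded bytes:

```python
import xml.etree.ElementTree as ET

XENC = "http://www.w3.org/2001/04/xmlenc#"
ET.register_namespace("xenc", XENC)

# EncryptedData replaces the plaintext element in the document.
enc = ET.Element(f"{{{XENC}}}EncryptedData", Type=f"{XENC}Element")
ET.SubElement(enc, f"{{{XENC}}}EncryptionMethod",
              Algorithm=f"{XENC}aes128-cbc")
cipher_data = ET.SubElement(enc, f"{{{XENC}}}CipherData")
cipher_value = ET.SubElement(cipher_data, f"{{{XENC}}}CipherValue")
cipher_value.text = "PLACEHOLDER-BASE64-CIPHERTEXT"  # not real ciphertext

xml_out = ET.tostring(enc, encoding="unicode")
print(xml_out)
```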

  7. XML under the Hood.

    Science.gov (United States)

    Scharf, David

    2002-01-01

    Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…

  8. Semantically Interoperable XML Data.

    Science.gov (United States)

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-09-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups.
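
    XML Schema has a standard hook for exactly the kind of semantic annotation this record describes: `xs:annotation`/`xs:appinfo`. A hedged sketch, where the `concept` element and its ontology URI are hypothetical illustrations rather than the authors' actual annotation vocabulary:

```xml
<xs:element name="imagingObservation" type="xs:string">
  <xs:annotation>
    <xs:appinfo>
      <!-- hypothetical link from a schema element to an ontology concept -->
      <concept ref="http://example.org/ontology#ImagingObservation"/>
    </xs:appinfo>
  </xs:annotation>
</xs:element>
```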

  9. Semantically Interoperable XML Data

    Science.gov (United States)

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789

  10. ScotlandsPlaces XML: Bespoke XML or XML Mapping?

    Science.gov (United States)

    Beamer, Ashley; Gillick, Mark

    2010-01-01

    Purpose: The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross-domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them.…

  11. XML Schema Representation of DICOM Structured Reporting.

    Science.gov (United States)

    Lee, K P; Hu, Jingkun

    2003-01-01

    Objective: The Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) standard improves the expressiveness, precision, and comparability of documentation about diagnostic images and waveforms. It supports the interchange of clinical reports in which critical features shown by images and waveforms can be denoted unambiguously by the observer, indexed, and retrieved selectively by subsequent reviewers. It is essential to provide access to clinical reports across the health care enterprise by using technologies that facilitate information exchange and processing by computers as well as provide support for robust and semantically rich standards, such as DICOM. This is supported by the current trend in the healthcare industry towards the use of Extensible Markup Language (XML) technologies for storage and exchange of medical information. The objective of the work reported here is to develop XML Schema for representing DICOM SR as XML documents. Design: We briefly describe the document type definition (DTD) for XML and its limitations, followed by XML Schema (the intended replacement for DTD) and its features. A framework for generating XML Schema for representing DICOM SR in XML is presented next. Measurements: None applicable. Results: A schema instance based on an SR example in the DICOM specification was created and validated against the schema. The schema is being used extensively in producing reports on Philips Medical Systems ultrasound equipment. Conclusion: With the framework described it is feasible to generate XML Schema using the existing DICOM SR specification. It can also be applied to generate XML Schemas for other DICOM information objects.

  12. XML Schema Representation of DICOM Structured Reporting

    Science.gov (United States)

    Lee, K. P.; Hu, Jingkun

    2003-01-01

    Objective: The Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) standard improves the expressiveness, precision, and comparability of documentation about diagnostic images and waveforms. It supports the interchange of clinical reports in which critical features shown by images and waveforms can be denoted unambiguously by the observer, indexed, and retrieved selectively by subsequent reviewers. It is essential to provide access to clinical reports across the health care enterprise by using technologies that facilitate information exchange and processing by computers as well as provide support for robust and semantically rich standards, such as DICOM. This is supported by the current trend in the healthcare industry towards the use of Extensible Markup Language (XML) technologies for storage and exchange of medical information. The objective of the work reported here is to develop XML Schema for representing DICOM SR as XML documents. Design: We briefly describe the document type definition (DTD) for XML and its limitations, followed by XML Schema (the intended replacement for DTD) and its features. A framework for generating XML Schema for representing DICOM SR in XML is presented next. Measurements: None applicable. Results: A schema instance based on an SR example in the DICOM specification was created and validated against the schema. The schema is being used extensively in producing reports on Philips Medical Systems ultrasound equipment. Conclusion: With the framework described it is feasible to generate XML Schema using the existing DICOM SR specification. It can also be applied to generate XML Schemas for other DICOM information objects. PMID:12595410

  13. XML in Libraries.

    Science.gov (United States)

    Tennant, Roy, Ed.

    This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…

  14. XML Based Course Websites.

    Science.gov (United States)

    Wollowski, Michael

    XML, the extensible markup language, is a quickly evolving technology that presents a viable alternative to courseware products and promises to ease the burden of Web authors, who edit their course pages directly. XML uses tags to label kinds of contents, rather than format information. The use of XML enables faculty to focus on providing…

  15. XML: Ejemplos de uso

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - An XML application = a markup language = a vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  16. RelaXML

    DEFF Research Database (Denmark)

    Knudsen, Steffen Ulsø; Pedersen, Torben Bach; Thomsen, Christian

    In modern enterprises, almost all data is stored in relational databases. Additionally, most enterprises increasingly collaborate with other enterprises in long-running read-write workflows, primarily through XML-based data exchange technologies such as web services. However, bidirectional XML data exchange is cumbersome and must often be hand-coded, at considerable expense. This paper remedies the situation by proposing RELAXML, an automatic and effective approach to bidirectional XML-based exchange of relational data. RELAXML supports re-use through multiple inheritance, and handles both export of relational data to XML documents and (re-)import of XML documents with a large degree of flexibility in terms of the SQL statements and XML document structures supported. Import and export are formally defined so as to avoid semantic problems, and algorithms to implement both are given. A performance study...

  17. RelaXML

    DEFF Research Database (Denmark)

    Knudsen, Steffen Ulsø; Pedersen, Torben Bach; Thomsen, Christian

    In modern enterprises, almost all data is stored in relational databases. Additionally, most enterprises increasingly collaborate with other enterprises in long-running read-write workflows, primarily through XML-based data exchange technologies such as web services. However, bidirectional XML data exchange is cumbersome and must often be hand-coded, at considerable expense. This paper remedies the situation by proposing RELAXML, an automatic and effective approach to bidirectional XML-based exchange of relational data. RELAXML supports re-use through multiple inheritance, and handles both export of relational data to XML documents and (re-)import of XML documents with a large degree of flexibility in terms of the SQL statements and XML document structures supported. Import and export are formally defined so as to avoid semantic problems, and algorithms to implement both are given. A performance study...

  18. XML and Free Text.

    Science.gov (United States)

    Riggs, Ken Roger

    2002-01-01

    Discusses problems with marking free text, text that is either natural language or semigrammatical but unstructured, that prevent well-formed XML from marking text for readily available meaning. Proposes a solution to mark meaning in free text that is consistent with the intended simplicity of XML versus SGML. (Author/LRW)

  19. XML Views: Part 1

    NARCIS (Netherlands)

    Rajugan, R.; Marik, V.; Retschitzegger, W.; Chang, E.; Dillon, T.; Stepankova, O.; Feng, L.

    The exponential growth and nature of the Internet and web-based applications have made eXtensible Markup Language (XML) the de facto standard for data exchange and data dissemination. It is now gaining momentum in replacing conventional data models for data representation. XML with its self-describing

  20. Automata, Logic, and XML

    OpenAIRE

    NEVEN, Frank

    2002-01-01

    We survey some recent developments in the broad area of automata and logic which are motivated by the advent of XML. In particular, we consider unranked tree automata, tree-walking automata, and automata over infinite alphabets. We focus on their connection with logic and on questions imposed by XML.

  1. Securing XML Documents

    Directory of Open Access Journals (Sweden)

    Charles Shoniregun

    2004-11-01

    XML (extensible markup language) is becoming the current standard for establishing interoperability on the Web. XML data are self-descriptive and syntax-extensible; this makes it very suitable for representation and exchange of semi-structured data, and allows users to define new elements for their specific applications. As a result, the number of documents incorporating this standard is continuously increasing over the Web. The processing of XML documents may require a traversal of the entire document structure, and the cost can therefore be very high. A strong demand for a means of efficient and effective XML processing has posed a new challenge for the database world. This paper discusses a fast and efficient indexing technique for XML documents, and introduces the XML graph numbering scheme. It can be used for indexing and securing the graph structure of XML documents. This technique provides an efficient method to speed up XML data processing. Furthermore, the paper explores the classification of existing methods and their impact on query processing and indexing.
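
    The abstract does not spell out the authors' numbering scheme, but a classic numbering used for XML indexing is pre/post (region) encoding: each node gets a (start, end) interval in document order, and structural queries like ancestor-descendant tests reduce to interval containment with no traversal. A sketch of that general idea, not the paper's specific scheme:

```python
import xml.etree.ElementTree as ET

def number_tree(elem, counter=None, index=None):
    """Assign a (start, end) interval to every node in document order."""
    if counter is None:
        counter, index = [0], {}
    counter[0] += 1
    start = counter[0]
    for child in elem:
        number_tree(child, counter, index)
    counter[0] += 1
    index[elem] = (start, counter[0])
    return index

doc = ET.fromstring("<a><b><c/></b><d/></a>")
idx = number_tree(doc)

def is_ancestor(anc, desc):
    # ancestor-descendant test = interval containment
    (s1, e1), (s2, e2) = idx[anc], idx[desc]
    return s1 < s2 and e2 < e1

a = doc
b = doc.find("b")
c = b.find("c")
d = doc.find("d")
print(is_ancestor(a, c))  # True  -- <a> encloses <c>
print(is_ancestor(b, d))  # False -- <b> and <d> are siblings' subtrees
```

    The point is the speed-up the record claims: once the numbers are stored in an index, structural relationships are answered by comparing integers rather than walking the document.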

  2. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In
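
    The PXML-specific techniques are not detailed in the record, but the underlying observation, that XML's repetitive markup compresses extremely well with even generic compressors, is easy to demonstrate with the standard library (tag names below are hypothetical):

```python
import zlib

# A repetitive probabilistic-XML-like fragment: many elements sharing
# the same tags and attribute names.
doc = ("<pxml>"
       + "".join(f'<item prob="0.5"><value>{i}</value></item>'
                 for i in range(200))
       + "</pxml>").encode()

compressed = zlib.compress(doc, 9)
ratio = len(compressed) / len(doc)
print(f"{len(doc)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

    Dedicated XML compressors (and PXML-aware ones) exploit the document structure further, separating tag structure from text content, which is why the record suggests probabilistic XML "might benefit from these methods even more."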

  3. XML Graphs in Program Analysis

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    2007-01-01

    XML graphs have shown to be a simple and effective formalism for representing sets of XML documents in program analysis. It has evolved through a six-year period with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey four very different applications: XML in Java, Java Servlets and JSP, transformations between XML and non-XML data, and XSLT.

  4. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    XML is successful as a machine processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative, more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML. We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.
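
    XSugar derives both translations from one context-free grammar; as a much smaller stand-in, the reversibility requirement it checks can be illustrated by a hand-written pair of translations between a "key: value" line syntax and XML, with a round-trip check (the syntax and element names here are invented for illustration):

```python
import xml.etree.ElementTree as ET

def to_xml(text):
    """Alternative syntax -> XML: each 'key: value' line becomes an element."""
    root = ET.Element("props")
    for line in text.strip().splitlines():
        key, _, value = line.partition(": ")
        ET.SubElement(root, "prop", name=key).text = value
    return ET.tostring(root, encoding="unicode")

def from_xml(xml_text):
    """XML -> alternative syntax: the inverse translation."""
    root = ET.fromstring(xml_text)
    return "\n".join(f'{p.get("name")}: {p.text}' for p in root)

src = "host: example.org\nport: 8080"
print(to_xml(src))
# Reversibility -- the property XSugar verifies statically, checked
# here dynamically on one input:
assert from_xml(to_xml(src)) == src
```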

  5. Plug-and-Play XML

    Science.gov (United States)

    Schweiger, Ralf; Hoelzer, Simon; Altmann, Udo; Rieger, Joerg; Dudeck, Joachim

    2002-01-01

    The application of XML (Extensible Markup Language) is still costly. The authors present an approach to ease the development of XML applications. They have developed a Web-based framework that combines existing XML resources into a comprehensive XML application. The XML framework is model-driven, i.e., the authors primarily design XML document models (XML schema, document type definition), and users can enter, search, and view related XML documents using a Web browser. The XML model itself is flexible and might be composed of existing model standards. The second part of the paper relates the approach of the authors to some problems frequently encountered in the clinical documentation process. PMID:11751802

  6. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    XML is successful as a machine processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative, more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML. We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes...

  7. Expressiveness considerations of XML signatures

    DEFF Research Database (Denmark)

    Jensen, Meiko; Meyer, Christopher

    2011-01-01

    XML Signatures are used to protect XML-based Web Service communication against a broad range of attacks related to man-in-the-middle scenarios. However, due to the complexity of the Web Services specification landscape, the task of applying XML Signatures in a robust and reliable manner becomes more and more challenging. In this paper, we investigate this issue, describing how an attacker can still interfere with Web Services communication even in the presence of XML Signatures. Additionally, we discuss the interrelation of XML Signatures and XML Encryption, focussing on their security...

  8. XML Graphs in Program Analysis

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael I.

    2011-01-01

    XML graphs have shown to be a simple and effective formalism for representing sets of XML documents in program analysis. It has evolved through a six-year period with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey the use of XML graphs for program analysis with four very different languages: XACT (XML in Java), Java Servlets (Web application programming), XSugar...

  9. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...
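
    One common way to store XML in a relational database, which the record's setting presupposes, is "shredding" the tree into an edge table. A minimal sketch with the standard library's `sqlite3` (the table schema here is illustrative, not the authors' design):

```python
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE node (
    id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)""")

def shred(elem, parent=None):
    """Store each element as a row pointing at its parent row."""
    cur = conn.execute(
        "INSERT INTO node (parent, tag, text) VALUES (?, ?, ?)",
        (parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, cur.lastrowid)

shred(ET.fromstring(
    "<library><book><title>XML</title></book>"
    "<book><title>SQL</title></book></library>"))

# Once shredded, declarative SQL queries work over the document:
titles = [t for (t,) in conn.execute(
    "SELECT text FROM node WHERE tag = 'title' ORDER BY id")]
print(titles)  # ['XML', 'SQL']
```

    This is what lets an XML store inherit the relational services the abstract lists: query languages, transactions, and indexes all apply to the shredded rows.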

  10. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2008-01-01

    ...of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.

  11. Intelligent Search on XML Data

    NARCIS (Netherlands)

    Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.; Unknown, [Unknown

    2003-01-01

    Recently, we have seen a steep increase in the popularity and adoption of XML, in areas such as traditional databases, e-business, the scientific environment, and on the web. Querying XML documents and data efficiently is a challenging issue; this book approaches search on XML data by combining

  12. Storing XML Documents in Databases

    NARCIS (Netherlands)

    A.R. Schmidt; S. Manegold (Stefan); M.L. Kersten (Martin); L.C. Rivero; J.H. Doorn; V.E. Ferraggine

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML

  13. XML and Better Web Searching.

    Science.gov (United States)

    Jackson, Joe; Gilstrap, Donald L.

    1999-01-01

    Addresses the implications of the new Web metalanguage XML for searching on the World Wide Web and considers the future of XML on the Web. Compared to HTML, XML is more concerned with the structure of data than of documents, and these data structures should prove conducive to precise, context-rich searching. (Author/LRW)

  14. Specifying OLAP Cubes On XML Data

    DEFF Research Database (Denmark)

    Jensen, Mikael Rune; Møller, Thomas Holmgren; Pedersen, Torben Bach

    On-Line Analytical Processing (OLAP) enables analysts to gain insight into data through fast and interactive access to a variety of possible views on information, organized in a dimensional model. The demand for data integration is rapidly becoming larger as more and more information sources appear in modern enterprises. In the data warehousing approach, selected information is extracted in advance and stored in a repository. This approach is used because of its high performance. However, in many situations a logical (rather than physical) integration of data is preferable. Previous web-based data... Extensible Markup Language (XML) is fast becoming the new standard for data representation and exchange on the World Wide Web. The rapid emergence of XML data on the web, e.g., business-to-business (B2B) ecommerce, is making it necessary for OLAP and other data analysis tools to handle XML data as well...
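
    At its smallest, "OLAP over XML data" means grouping facts drawn from an XML source by a dimension and aggregating a measure. A toy sketch (element and attribute names are hypothetical; the paper's approach integrates XML sources into proper OLAP dimensions rather than ad hoc aggregation):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring("""
<invoices>
  <line region="EU" amount="100"/>
  <line region="EU" amount="50"/>
  <line region="US" amount="70"/>
</invoices>""")

# Group by the 'region' dimension and sum the 'amount' measure.
cube = defaultdict(int)
for line in doc.iter("line"):
    cube[line.get("region")] += int(line.get("amount"))

print(dict(cube))  # {'EU': 150, 'US': 70}
```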

  15. Juwele in XML

    OpenAIRE

    Habekost, Engelbert

    2005-01-01

    In the research division of Humboldt-Universität, the publication series "Öffentliche Vorlesungen" has been produced with the FrameMaker software since 2002. This involved converting the production process to XML-based document creation and bringing the entire prepress stage in-house.

  16. XBRL: Beyond Basic XML

    Science.gov (United States)

    VanLengen, Craig Alan

    2010-01-01

    The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…

  17. System architecture with XML

    CERN Document Server

    Daum, Berthold

    2002-01-01

    XML is bringing together some fairly disparate groups into a new cultural clash: document developers trying to understand what a transaction is, database analysts getting upset because the relational model doesn't fit anymore, and web designers having to deal with schemata and rule-based transformations. The key to rising above the confusion is to understand the different semantic structures that lie beneath the standards of XML, and how to model the semantics to achieve the goals of the organization. A pure architecture of XML doesn't exist yet, and it may never exist as the underlying technologies are so diverse. Still, the key to understanding how to build the new web infrastructure for electronic business lies in understanding the landscape of these new standards. If your background is in document processing, this book will show how you can use conceptual modeling to model business scenarios consisting of business objects, relationships, processes, and transactions in a document-centric way. Database des...

  18. phyloXML: XML for evolutionary biology and comparative genomics.

    Science.gov (United States)

    Han, Mira V; Zmasek, Christian M

    2009-10-27

    Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. We developed an XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, as well as tools implementing and supporting phyloXML, is available at http://www.phyloxml.org.
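
    A minimal phyloXML document can be assembled with the standard library. The element names below (phyloxml, phylogeny, clade, name, branch_length) follow the published phyloXML vocabulary, but this is an unvalidated sketch; real output should be checked against the XSD at phyloxml.org:

```python
import xml.etree.ElementTree as ET

NS = "http://www.phyloxml.org"
ET.register_namespace("", NS)

# A rooted tree with one internal clade and two leaf taxa.
root = ET.Element(f"{{{NS}}}phyloxml")
phylogeny = ET.SubElement(root, f"{{{NS}}}phylogeny", rooted="true")
clade = ET.SubElement(phylogeny, f"{{{NS}}}clade")
for taxon, length in [("A", "0.1"), ("B", "0.2")]:
    leaf = ET.SubElement(clade, f"{{{NS}}}clade")
    ET.SubElement(leaf, f"{{{NS}}}name").text = taxon
    ET.SubElement(leaf, f"{{{NS}}}branch_length").text = length

print(ET.tostring(root, encoding="unicode"))
```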

  19. Java facilities in processing XML files - JAXB and generating PDF reports

    Directory of Open Access Journals (Sweden)

    Danut-Octavian SIMION

    2008-01-01

    The paper presents the Java programming language facilities for working with XML files using JAXB (the Java Architecture for XML Binding) technology and for generating PDF reports from XML files using Java objects. The XML file can be an existing one containing data about an entity (Clients, for example), or it might be the result of a SELECT SQL statement. JAXB generates Java classes from XML Schema (xs) rules and provides a marshalling/unmarshalling compiler. The PDF file is built from an XML file and uses an XSL-FO formatting file and a Java ResultSet object.
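
    JAXB generates the binding classes from an XSD and handles marshalling and unmarshalling automatically; the following is only a loose, hand-written Python analogue of its two directions, with a hypothetical Client record, to make the terminology concrete:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Client:          # hypothetical bound class
    name: str
    city: str

def unmarshal(xml_text):
    """XML -> objects (JAXB's 'unmarshalling' direction)."""
    return [Client(c.findtext("name"), c.findtext("city"))
            for c in ET.fromstring(xml_text).findall("client")]

def marshal(clients):
    """Objects -> XML (JAXB's 'marshalling' direction)."""
    root = ET.Element("clients")
    for c in clients:
        e = ET.SubElement(root, "client")
        ET.SubElement(e, "name").text = c.name
        ET.SubElement(e, "city").text = c.city
    return ET.tostring(root, encoding="unicode")

xml_in = ("<clients><client><name>Acme</name>"
          "<city>Oslo</city></client></clients>")
clients = unmarshal(xml_in)
print(clients)  # [Client(name='Acme', city='Oslo')]
assert unmarshal(marshal(clients)) == clients  # round trip
```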

  20. Querying XML Data with SPARQL

    Science.gov (United States)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In the recent years XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries which are used to access the XML databases. We present the algorithms and the implementation of SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  1. XML-Intensive software development

    OpenAIRE

    Ibañez Anfurrutia, Felipe

    2016-01-01

    168 p. 1. Introduction: XML is a meta-markup language, that is, it can be used primarily to create markup languages. The presence of XML is a widespread phenomenon. However, its youth means that developers face many challenges when using XML in cutting-edge applications. This thesis confronts XML with three different scenarios: document exchange, Software Product Lines (SPL), and Domain-Specific Languages (DSL). The digital exchange...

  2. Beginning XML, 5th Edition

    CERN Document Server

    Fawcett, Joe; Quin, Liam R E

    2012-01-01

    A complete update covering the many advances to the XML language The XML language has become the standard for writing documents on the Internet and is constantly improving and evolving. This new edition covers all the many new XML-based technologies that have appeared since the previous edition four years ago, providing you with an up-to-date introductory guide and reference. Packed with real-world code examples, best practices, and in-depth coverage of the most important and relevant topics, this authoritative resource explores both the advantages and disadvantages of XML and addresses the mo

  3. "The Wonder Years" of XML.

    Science.gov (United States)

    Gazan, Rich

    2000-01-01

    Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…

  4. XML Schema Languages: Beyond DTD.

    Science.gov (United States)

    Ioannides, Demetrios

    2000-01-01

    Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)
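
    The gap between DTD and XML Schema that the working group set out to close can be shown in one example: a DTD can only say that an element holds character data, while XML Schema can also constrain its datatype and range:

```xml
<!-- DTD: structure only; no datatypes -->
<!ELEMENT price (#PCDATA)>

<!-- XML Schema: the same element, now typed and range-constrained -->
<xs:element name="price">
  <xs:simpleType>
    <xs:restriction base="xs:decimal">
      <xs:minInclusive value="0"/>
    </xs:restriction>
  </xs:simpleType>
</xs:element>
```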

  5. How Does XML Help Libraries?

    Science.gov (United States)

    Banerjee, Kyle

    2002-01-01

    Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…

  6. XML Diagnostics Description Standard

    International Nuclear Information System (INIS)

    Neto, A.; Fernandes, H.; Varandas, C.; Lister, J.; Yonekawa, I.

    2006-01-01

    A standard for the self-description of fusion plasma diagnostics will be presented, based on the Extensible Markup Language (XML). The motivation is to maintain and organise the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money when problems arise. Since there is no existing standard to organise this kind of information, every Association stores and organises each experiment in different ways. This can lead to severe problems when the organisation schema is poorly documented or written in national languages. The exchange of scientists, researchers and engineers between laboratories is a common practice nowadays. Sometimes they have to install new diagnostics or to update existing ones and frequently they lose a great deal of time trying to understand the currently installed system. The most common problems are: no documentation available; the person who understands it has left; documentation written in the national language. Standardisation is the key to solving all the problems mentioned. From the commercial information on the diagnostic (component supplier; component price) to the hardware description (component specifications; drawings) to the operation of the equipment (finite state machines) through change control (who changed what and when) and internationalisation (information at least in the native language and in English), a common XML schema will be proposed. This paper will also discuss an extension of these ideas to the self-description of ITER plant systems, since the problems will be identical. (author)

  7. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using the Extensible Markup Language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schemas.

  8. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communication (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview including the specimen and substantial parts of several other major elements has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard, promote interoperability, and should also result in the textual and numeric data being published using web technology without any change in composition.

  9. The Cadmio XML healthcare record.

    Science.gov (United States)

    Barbera, Francesco; Ferri, Fernando; Ricci, Fabrizio L; Sottile, Pier Angelo

    2002-01-01

    The management of clinical data is a complex task. Patient related information reported in patient folders is a set of heterogeneous and structured data accessed by different users having different goals (in local or geographical networks). XML language provides a mechanism for describing, manipulating, and visualising structured data in web-based applications. XML ensures that the structured data is managed in a uniform and transparent manner independently from the applications and their providers guaranteeing some interoperability. Extracting data from the healthcare record and structuring them according to XML makes the data available through browsers. The MIC/MIE model (Medical Information Category/Medical Information Elements), which allows the definition and management of healthcare records and used in CADMIO, a HISA based project, is described in this paper, using XML for allowing the data to be visualised through web browsers.

  10. Engineering XML Solutions Using Views

    NARCIS (Netherlands)

    Rajugan, R.; Chang, E.; Dillon, T.S.; Feng, L.

    In industrial informatics, engineering data intensive Enterprise Information Systems (EIS) is a challenging task without abstraction and partitioning. Further, the introduction of semi-structured data (namely XML) and its rapid adaptation by the commercial and industrial systems increased the

  11. Health Topic XML File Description

    Science.gov (United States)

    ... this page: https://medlineplus.gov/xmldescription.html Health Topic XML File Description: MedlinePlus To use the sharing ... information categories assigned. Example of a Full Health Topic Record A record for a MedlinePlus health topic ...

  12. XML specifications DanRIS

    DEFF Research Database (Denmark)

    2009-01-01

    XML specifications for DanRIS (Danish Registration- og InformationsSystem), where the aim is: improved exchange of data; improved data processing; ensuring future access to all data gathered from the year 1999 until now.

  13. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
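The generic DAG-compression idea mentioned at the end of the abstract amounts to storing each distinct subtree once and sharing it. A toy Python sketch (the key layout and node table are assumptions for illustration, not the paper's algorithm):

```python
import xml.etree.ElementTree as ET

# Toy DAG compression: structurally identical XML subtrees collapse to one
# shared node id, held in a table keyed by (tag, text, child-ids).
def dag_compress(elem, table):
    key = (elem.tag, (elem.text or "").strip(),
           tuple(dag_compress(c, table) for c in elem))
    return table.setdefault(key, len(table))

doc = ET.fromstring(
    "<r><item><a>1</a><b>2</b></item><item><a>1</a><b>2</b></item></r>")
table = {}
dag_compress(doc, table)
# Six element nodes, but the two identical <item> subtrees are shared.
print(len(table))  # → 4
```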

  14. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
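The relational-rows-to-XML mapping described above can be sketched with Python's stdlib ElementTree; the column names below are invented for illustration, not the actual ADASS MySQL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical rows as they might come back from a conference database query.
rows = [
    {"author": "Barg, M. I.", "title": "ADASS Web Database XML Project"},
    {"author": "Stobie, E. B.", "title": "Another Paper"},
]

def rows_to_xml(rows, root_tag="papers", row_tag="paper"):
    """Map each relational row to one XML element, one child per column."""
    root = ET.Element(root_tag)
    for row in rows:
        rec = ET.SubElement(root, row_tag)
        for col, val in row.items():
            ET.SubElement(rec, col).text = val
    return ET.tostring(root, encoding="unicode")

xml = rows_to_xml(rows)
print(xml)
```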

  15. Updating Recursive XML Views of Relations

    DEFF Research Database (Denmark)

    Choi, Byron; Cong, Gao; Fan, Wenfei

    2009-01-01

    This paper investigates the view update problem for XML views published from relational data. We consider XML views defined in terms of mappings directed by possibly recursive DTDs compressed into DAGs and stored in relations. We provide new techniques to efficiently support XML view updates...... specified in terms of XPath expressions with recursion and complex filters. The interaction between XPath recursion and DAG compression of XML views makes the analysis of the XML view update problem rather intriguing. Furthermore, many issues are still open even for relational view updates, and need...... to be explored. In response to these, on the XML side, we revise the notion of side effects and update semantics based on the semantics of XML views, and present efficient algorithms to translate XML updates to relational view updates. On the relational side, we propose a mild condition on SPJ views, and show...

  16. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  17. Static Analysis of XML Transformations in Java

    DEFF Research Database (Denmark)

    Kirkegaard, Christian; Møller, Anders; Schwartzbach, Michael I.

    2004-01-01

    of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms from one class to another as intended. We introduce Xact, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values...

  18. Statistical Language Models for Intelligent XML Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    The XML standards that are currently emerging have a number of characteristics that can also be found in database management systems, like schemas (DTDs and XML schema) and query languages (XPath and XQuery). Following this line of reasoning, an XML database might resemble traditional database

  19. Statistical language Models for Intelligent XML Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Blanken, H.M.; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    The XML standards that are currently emerging have a number of characteristics that can also be found in database management systems, like schemas (DTDs and XML schema) and query languages (XPath and XQuery). Following this line of reasoning, an XML database might resemble traditional database

  20. Using XML to encode TMA DES metadata

    Directory of Open Access Journals (Sweden)

    Oliver Lyttleton

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.

  1. Using XML to encode TMA DES metadata.

    Science.gov (United States)

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.

  2. Using XML to encode TMA DES metadata

    Science.gov (United States)

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs. PMID:21969921

  3. Enterprise Architecture Analysis with XML

    NARCIS (Netherlands)

    F.S. de Boer (Frank); M.M. Bonsangue (Marcello); J.F. Jacob (Joost); A. Stam; L.W.N. van der Torre (Leon)

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural

  4. Scripting XML with Generic Haskell

    NARCIS (Netherlands)

    Atanassow, F.; Clarke, D.; Jeuring, J.T.

    2003-01-01

    A generic program is written once and works on values of many data types. Generic Haskell is a recent extension of the functional programming language Haskell that supports generic programming. This paper discusses how Generic Haskell can be used to implement XML tools whose behaviour depends on

  5. Static Analysis for Dynamic XML

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Møller, Anders; Schwartzbach, Michael Ignatieff

    2002-01-01

    We describe the summary graph lattice for dataflow analysis of programs that dynamically construct XML documents. Summary graphs have successfully been used to provide static guarantees in the JWIG language for programming interactive Web services. In particular, the JWIG compiler is able to check...

  6. Scripting XML with Generic Haskell

    NARCIS (Netherlands)

    Atanassow, F.; Clarke, D.; Jeuring, J.T.

    2007-01-01

    A generic program is written once and works on values of many data types. Generic Haskell is a recent extension of the functional programming language Haskell that supports generic programming. This paper discusses how Generic Haskell can be used to implement XML tools whose behaviour depends on

  7. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...

  8. DICOM involving XML path-tag

    Science.gov (United States)

    Zeng, Qiang; Yao, Zhihong; Liu, Lei

    2011-03-01

    Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. XML (Extensible Markup Language) is a set of rules for encoding documents in machine-readable form which has become more and more popular. Combining the two is both necessary and promising. Using XML tags instead of numeric labels in DICOM files will effectively increase readability and make the hierarchical structure of DICOM files clearer. However, because XML tags rely heavily on their ordering, this strong data dependency reduces the flexibility of inserting and exchanging data. In order to improve the extensibility and sharing of DICOM files, this paper introduces the XML Path-Tag to DICOM. When a DICOM file is converted to XML format, adding a simple Path-Tag into the DICOM file in place of complex tags keeps the flexibility of a DICOM file when inserting data elements, while preserving the structural advantages and readability of an XML file. Our method solves the weak readability of DICOM files and the tedious work of inserting data into an XML file. In addition, we set up a conversion engine that can transform efficiently among traditional DICOM files, XML-DCM files, and XML-DCM files involving the XML Path-Tag.

  9. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)


  10. Enterprise Architecture Analysis with XML

    OpenAIRE

    Boer, Frank; Bonsangue, Marcello; Jacob, Joost; Stam, A.; Torre, Leon

    2005-01-01

    This paper shows how XML can be used for static and dynamic analysis of architectures. Our analysis is based on the distinction between symbolic and semantic models of architectures. The core of a symbolic model consists of its signature that specifies symbolically its structural elements and their relationships. A semantic model is defined as a formal interpretation of the symbolic model. This provides a formal approach to the design of architectural description languages and a g...

  11. XML Translator for Interface Descriptions

    Science.gov (United States)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
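A minimal sketch of the translation idea described above, written in Python. The register-map XML layout (tag and attribute names) is a hypothetical stand-in, since the actual schema is not given in the abstract:

```python
import xml.etree.ElementTree as ET

# Illustrative interface description; the real schema is an assumption here.
SPEC = """
<interface>
  <register name="STATUS" offset="0x00"/>
  <register name="CONTROL" offset="0x04"/>
</interface>
"""

def to_c_header(xml_text):
    """Translate a register-map XML description into C #define constants."""
    root = ET.fromstring(xml_text)
    lines = ["/* generated -- do not edit */"]
    for reg in root.findall("register"):
        lines.append(f"#define REG_{reg.get('name')} {reg.get('offset')}")
    return "\n".join(lines)

print(to_c_header(SPEC))
```

Because the header is regenerated from the same XML whenever the interface changes, both sides of the interface stay consistent by construction.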

  12. Speed up of XML parsers with PHP language implementation

    Science.gov (United States)

    Georgiev, Bozhidar; Georgieva, Adriana

    2012-11-01

    In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for combining the PHP5 language and the XML standard are described, and the details of the parsing process with SimpleXML are explained. A practical PHP-XML-MySQL project presents the advantages of XML implementation in PHP modules. This approach allows comparatively simple searching of hierarchical XML data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.
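The read/parse/modify/write workflow the abstract demonstrates with PHP's SimpleXML can be sketched with Python's stdlib ElementTree (Python is used here purely for illustration of the same pattern):

```python
import xml.etree.ElementTree as ET

# Read and parse a short XML document, pull out a value, modify it, and
# serialize it back -- the same round trip SimpleXML offers in PHP.
doc = ET.fromstring("<book><title>Beginning XML</title></book>")
print(doc.findtext("title"))           # read a value → Beginning XML
doc.find("title").text = "XML Basics"  # modify it in place
out = ET.tostring(doc, encoding="unicode")
print(out)  # → <book><title>XML Basics</title></book>
```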

  13. XML technology planning database : lessons learned

    Science.gov (United States)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  14. XML Syntax for Clinical Laboratory Procedure Manuals

    OpenAIRE

    Saadawi, Gilan; Harrison, James H.

    2003-01-01

    We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large b...

  15. Assessing XML Data Management with XMark

    OpenAIRE

    Schmidt, A.R.; Waas, F.; Kersten, Martin; Carey, M.J.; Manolescu, I.; Busse, R.

    2002-01-01

    We discuss some of the experiences we gathered during the development and deployment of XMark, a tool to assess the infrastructure and performance of XML Data Management Systems. Since the appearance of the first XML database prototypes in research institutions and development labs, topics like validation, performance evaluation and optimization of XML query processors have received significant interest. The XMark benchmark follows a tradition in database research and provides a f...

  16. XML Publishing with Adobe InDesign

    CERN Document Server

    Hoskins, Dorothy

    2010-01-01

    From Adobe InDesign CS2 to InDesign CS5, the ability to work with XML content has been built into every version of InDesign. Some of the useful applications are importing database content into InDesign to create catalog pages, exporting XML that will be useful for subsequent publishing processes, and building chunks of content that can be reused in multiple publications. In this Short Cut, we'll play with the contents of a college course catalog and see how we can use XML for course descriptions, tables, and other content. Underlying principles of XML structure, DTDs, and the InDesign namesp

  17. XML-based analysis interface for particle physics data analysis

    International Nuclear Information System (INIS)

    Hu Jifeng; Lu Xiaorui; Zhang Yangheng

    2011-01-01

    The letter focuses on an XML-based interface and its framework for particle physics data analysis. The interface uses a concise XML syntax to describe the basic tasks in data analysis: event selection, kinematic fitting, particle identification, etc., and a basic processing logic: the next step goes on if and only if this step succeeds. The framework can perform an analysis without compiling, by loading the XML-interface file, setting parameters at run-time and running dynamically. An analysis coded in XML instead of C++ is easy to understand and use, effectively reduces the workload, and enables users to carry out their analyses quickly. The framework has been developed on the BESⅢ offline software system (BOSS) with object-oriented C++ programming. The functions required by the regular tasks and the basic processing logic are implemented either as standard modules or are inherited from modules in BOSS. The interface and its framework have been tested to perform physics analysis. (authors)
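The "next step runs only if this step succeeds" logic can be sketched as follows; the XML layout, step names, and handler registry are invented for illustration, not taken from the BOSS framework:

```python
import xml.etree.ElementTree as ET

# Hypothetical analysis description in the spirit of the abstract.
PIPELINE = """
<analysis>
  <step name="event-selection"/>
  <step name="kinematic-fitting"/>
  <step name="particle-id"/>
</analysis>
"""

def run(xml_text, handlers):
    """Run steps in order; stop and report failure on the first False."""
    for step in ET.fromstring(xml_text).findall("step"):
        if not handlers[step.get("name")]():
            return False
    return True

handlers = {"event-selection": lambda: True,
            "kinematic-fitting": lambda: True,
            "particle-id": lambda: True}
print(run(PIPELINE, handlers))  # → True
```

Changing the analysis then means editing the XML, not recompiling C++ code.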

  18. XML documents cluster research based on frequent subpatterns

    Science.gov (United States)

    Ding, Tienan; Li, Wei; Li, Xiongfei

    2015-12-01

    XML data is widely used in the information exchange field of the Internet, and XML document clustering is a hot research topic. In the XML document clustering process, measuring the difference between two XML documents is time-costly and impacts the efficiency of XML document clustering. This paper proposes an XML document clustering method based on frequent patterns of an XML document dataset. We first propose a coding tree structure for encoding the XML document, translating frequent-pattern mining over XML documents into frequent-pattern mining over strings. Further, the cosine similarity measure and cohesive hierarchical clustering are applied to the XML document dataset via its frequent patterns. Because frequent patterns are subsets of the original XML document data, the time consumed by XML document similarity measurement is reduced. The experiments run on a synthetic dataset and real datasets; the experimental results show that our method is efficient.
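The cosine-similarity step over frequent-pattern count vectors can be sketched in a few lines of Python; the vectors below are invented stand-ins for per-document counts of mined subpatterns:

```python
import math

# Cosine similarity between two pattern-count vectors: documents with the
# same relative mix of frequent subpatterns score close to 1.0.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

doc_a = [3, 1, 0, 2]  # counts of four frequent subpatterns in document A
doc_b = [3, 1, 0, 2]  # identical pattern profile
doc_c = [0, 0, 5, 0]  # disjoint pattern profile
print(cosine(doc_a, doc_b))  # → 1.0
print(cosine(doc_a, doc_c))  # → 0.0
```

Because the vectors are far smaller than the documents themselves, this pairwise comparison is the cheap step the paper's speed-up relies on.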

  19. On the effectiveness of XML schema validation for countering XML signature wrapping attacks

    DEFF Research Database (Denmark)

    Jensen, Meiko; Meyer, Christopher; Somorovsky, Juraj

    2011-01-01

    In the context of security of Web Services, the XML Signature Wrapping attack technique has lately received increasing attention. Following a broad range of real-world exploits, general interest in applicable countermeasures rises. However, few approaches for countering these attacks have been...... investigated closely enough to make any claims about their effectiveness. In this paper, we analyze the effectiveness of the specific countermeasure of XML Schema validation in terms of fending off Signature Wrapping attacks. We investigate the problems of XML Schema validation for Web Services messages......, and discuss the approach of Schema Hardening, a technique for strengthening XML Schema declarations. We conclude that XML Schema validation with a hardened XML Schema is capable of fending off XML Signature Wrapping attacks, but bears some pitfalls and disadvantages as well....

  20. XML a bezpečnost I

    Czech Academy of Sciences Publication Activity Database

    Brechlerová, Dagmar

    2007-01-01

    Vol. 9, No. 1 (2007), pp. 13-25 ISSN 1801-2140 R&D Projects: GA AV ČR 1ET200300413 Institutional research plan: CEZ:AV0Z10300504 Keywords: XML security * XML digital signature * XKMS Subject RIV: IN - Informatics, Computer Science http://crypto-world.info/index2.php

  1. TIJAH: Embracing IR Methods in XML Databases

    NARCIS (Netherlands)

    List, Johan; Mihajlovic, V.; Ramirez, Georgina; de Vries, A.P.; Hiemstra, Djoerd; Blok, H.E.

    2005-01-01

    This paper discusses our participation in INEX (the Initiative for the Evaluation of XML Retrieval) using the TIJAH XML-IR system. TIJAH's system design follows a `standard' layered database architecture, carefully separating the conceptual, logical and physical levels. At the conceptual level, we

  2. XAL: An algebra for XML query optimization

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Pau, C.D.; Zhou, Xiaofang

    2002-01-01

    This paper proposes XAL, an XML ALgebra. Its novelty is based on the simplicity of its data model and its well-defined logical operators, which makes it suitable for composability, optimizability, and semantics definition of a query language for XML data. At the heart of the algebra resides the

  3. Compressing Aviation Data in XML Format

    Science.gov (United States)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of the data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running it, which in turn depends on the nature of the data. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and which settings are optimal for that tool.
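The XML-specific compressors named above (Xmill, XMLPPM) are external tools; the following sketch uses only stdlib gzip to illustrate the underlying point that verbose, repetitive XML markup compresses well, so transferring compressed XML recovers much of the size overhead. The flight-record vocabulary is invented.

```python
# Illustration: repetitive XML tag structure yields high compression ratios
# with even a generic compressor such as gzip.
import gzip

record = "<flight><id>%d</id><alt>35000</alt><speed>450</speed></flight>"
xml_data = ("<flights>" + "".join(record % i for i in range(1000)) + "</flights>").encode()

compressed = gzip.compress(xml_data)
ratio = len(xml_data) / len(compressed)
print(len(xml_data), len(compressed))
print(ratio > 5)  # the repeated tags compress heavily
```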

  4. Get It Together: Integrating Data with XML.

    Science.gov (United States)

    Miller, Ron

    2003-01-01

    Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…

  5. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  6. How will XML impact industrial automation?

    CERN Multimedia

    Pinceti, P

    2002-01-01

    A working group of the World Wide Web Consortium (W3C) has overcome the limits of both HTML and SGML with the definition of the extensible markup language - XML. This article looks at how XML will affect industrial automation (2 pages).

  7. The OLAP-XML Federation System

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    We present the logical “OLAP-XML Federation System” that enables the external data available in XML format to be used as virtual dimensions. Unlike the complex and time-consuming physical integration of OLAP and external data in current OLAP systems, our system makes OLAP queries referencing fast...

  8. XML, TEI, and Digital Libraries in the Humanities.

    Science.gov (United States)

    Nellhaus, Tobin

    2001-01-01

    Describes the history and major features of XML and TEI, discusses their potential utility for the creation of digital libraries, and focuses on XML's application in the humanities, particularly theater and drama studies. Highlights include HTML and hyperlinks; the impact of XML on text encoding and document access; and XML and academic…

  9. An Introduction to the Extensible Markup Language (XML).

    Science.gov (United States)

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)
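A minimal sketch of the concepts this introduction covers: a structured XML document with text elements and non-text (empty) elements carrying data in attributes, parsed with Python's standard library. The `<order>` vocabulary is invented for illustration.

```python
import xml.etree.ElementTree as ET

doc = """
<order id="42">
  <customer>Ada Lovelace</customer>
  <item sku="X-1" qty="2"/>
  <item sku="Y-9" qty="1"/>
</order>
"""

root = ET.fromstring(doc)
print(root.tag, root.attrib["id"])       # element name and attribute
print(root.find("customer").text)        # a text element's content
# Empty (non-text) elements carry their data entirely in attributes.
skus = [item.get("sku") for item in root.findall("item")]
print(skus)
```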

  10. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  11. An Introduction to XML and Web Technologies

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    , building on top of the early foundations. This book offers a comprehensive introduction to the area. There are two main threads of development, corresponding to the two parts of this book. XML technologies generalize the notion of data on the Web from hypertext documents to arbitrary data, including those...... that have traditionally been the realm of databases. In this book we cover the basic XML technology and the supporting technologies of XPath, DTD, XML Schema, DSD2, RELAX NG, XSLT, XQuery, DOM, JDOM, JAXB, SAX, STX, SDuce, and XACT. Web technologies build on top of the HTTP protocol to provide richer...

  12. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  13. Integrity Based Access Control Model for Multilevel XML Document

    Institute of Scientific and Technical Information of China (English)

    HONG Fan; FENG Xue-bin; HUANO Zhi; ZHENG Ming-hui

    2008-01-01

    XML's increasing popularity highlights the security demands on XML documents. A mandatory access control model for XML documents is presented, based on an investigation of the functional dependencies of XML documents and a discussion of the integrity properties of multilevel XML documents. Algorithms are then given for decomposing a multilevel XML document into single-level documents and recovering it from them, and manipulation rules are elaborated for the typical XQuery and XUpdate operations QUERY, INSERT, UPDATE, and REMOVE. The multilevel XML document access model can meet the requirements of applications that process sensitive information.
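The decomposition idea can be sketched as filtering a multilevel XML tree on a security-label attribute to produce a single-level view; the label vocabulary and pruning rule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: derive a single-level view of a multilevel XML document by removing
# elements whose (assumed) "level" attribute exceeds the reader's clearance.
import copy
import xml.etree.ElementTree as ET

LEVELS = {"public": 0, "secret": 1}  # invented label ordering

def single_level_view(root, clearance):
    """Return a copy of the tree with elements above `clearance` removed."""
    view = copy.deepcopy(root)
    def prune(node):
        for child in list(node):
            if LEVELS.get(child.get("level", "public"), 0) > LEVELS[clearance]:
                node.remove(child)
            else:
                prune(child)
    prune(view)
    return view

doc = ET.fromstring(
    '<report><summary level="public">ok</summary>'
    '<detail level="secret">codes</detail></report>'
)
public_view = single_level_view(doc, "public")
print([child.tag for child in public_view])  # only public elements remain
```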

  14. XML Schema Guide for Primary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  15. XML Flight/Ground Data Dictionary Management

    Science.gov (United States)

    Wright, Jesse; Wiklow, Colette

    2007-01-01

    A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.
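The coupling idea described above, one XML data dictionary driving generation of legacy ground-system formats, can be sketched as follows; the telemetry fields and the comma-separated legacy layout are invented examples, not the program's actual formats.

```python
# Sketch: a single XML dictionary is the one source of truth; legacy formats
# are generated from it rather than maintained by hand.
import xml.etree.ElementTree as ET

dictionary = ET.fromstring("""
<telemetry>
  <channel id="T-0001" type="float" units="degC" desc="Battery temperature"/>
  <channel id="V-0002" type="float" units="V"    desc="Bus voltage"/>
</telemetry>
""")

# Convert to an assumed comma-separated legacy ground-system format.
legacy_lines = [
    "%s,%s,%s" % (ch.get("id"), ch.get("type"), ch.get("units"))
    for ch in dictionary.findall("channel")
]
print(legacy_lines[0])
```

Keeping all the information in one XML file, as the abstract notes, removes the transcription errors that arise when the same facts live in several unsynchronized documents.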

  16. Desain Sistem Keamanan Distribusi Data Dengan Menerapkan XML Encryption Dan XML Signature Berbasis Teknologi Web Service

    Directory of Open Access Journals (Sweden)

    Slamet Widodo

    2012-01-01

    Full Text Available The development of information technology is often misused by organizations or individuals for criminal acts, such as stealing and modifying information in distributed data for malicious purposes. The Rural Bank of Boyolali conducts online financial transactions fairly intensively and therefore requires a security system for the distribution of customer data and credit transactions between its branch offices and head office. The purpose of this study was to build such a security system for customer credit transactions at the Rural Bank of Boyolali. One way to protect data distribution is to use XML Encryption and XML Signature. These techniques were applied over a web service using the AES (Advanced Encryption Standard) and RSA (Rivest-Shamir-Adleman) algorithms. The study produced a SOAP (Simple Object Access Protocol) message security system, using XML and WSDL (Web Services Description Language) over HTTP (Hypertext Transfer Protocol), that protects customers' credit transactions from intruders. Analysis of the experiments indicated that the data size (in bytes) transferred with uncompressed XML Encryption was larger than with compressed XML Encryption, and accordingly the processing time for compressed data was shorter than for uncompressed XML Encryption.

  17. XML in Projects GNU Gama and 3DGI

    DEFF Research Database (Denmark)

    Kolar, Jan; Soucek, Petr; Cepek, Ales

    2003-01-01

    This paper presents our practical experiences with XML in geodetic and geographical applications. The main concepts and ideas of XML are introduced in an example of a simple web based information system, which exploits the XHTML language. The article further describes how XML is used in GNU Gama...... for structuring data for a geodetic network adjustment. In another application of XML, it is demonstrated how XML can be used for a unified description of data from leveling registration units. Finally, the use of XML for modelling 3D geographical features within the 3DGI project is presented and a relation...

  18. XML, Ontologies, and Their Clinical Applications.

    Science.gov (United States)

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.

  19. XML-based DICOM data format.

    Science.gov (United States)

    Yu, Cong; Yao, Zhihong

    2010-04-01

    To enhance the readability, improve the structure, and facilitate the sharing of Digital Imaging and Communications in Medicine (DICOM) files, this research proposed an XML-based DICOM data format. Because XML Schema offers great flexibility for expressing constraints on the content model of elements, we used it to describe the new format, thus making it consistent with the one originally defined by DICOM. Meanwhile, such schemas can be used in the creation and validation of the XML-encoded DICOM files, acting as a standard for data transmission and sharing on the Web. In defining the new data format, we started by representing a single data element and then improved the whole data structure through modularization. In contrast to the original format, the new one possesses a better structure without loss of related information. In addition, we demonstrated the application of XSLT and XQuery. All of the advantages mentioned above result from this new data format.
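A hedged sketch of the starting point the abstract describes, representing a single DICOM data element (group, element, value representation, value) in XML; the tag and attribute names here are illustrative, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Assumed XML shape for one DICOM data element (Patient's Name, VR "PN").
elem = ET.Element("DataElement", group="0010", element="0010", vr="PN")
ET.SubElement(elem, "Value").text = "DOE^JOHN"

xml_text = ET.tostring(elem, encoding="unicode")
print(xml_text)

# Round-trip: the XML form stays both machine- and human-readable.
parsed = ET.fromstring(xml_text)
print(parsed.get("vr"), parsed.find("Value").text)
```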

  20. Constructing an XML database of linguistics data

    Directory of Open Access Journals (Sweden)

    J H Kroeze

    2010-04-01

    Full Text Available A language-oriented, multi-dimensional database of the linguistic characteristics of the Hebrew text of the Old Testament can enable researchers to do ad hoc queries. XML is a suitable technology for transforming free text into a database. A clause's word order can be kept intact while other features, such as syntactic and semantic functions, can be marked as elements or attributes. The elements or attributes from the XML "database" can be accessed and processed by a 4th-generation programming language, such as Visual Basic. XML is explored as an option for building an exploitable database of linguistic data by representing inherently multi-dimensional data, including syntactic and semantic analyses of free text.
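The encoding idea can be sketched directly: element order preserves the clause's word order, while attributes carry the syntactic and semantic analyses and support ad hoc queries. The tag and attribute names below are illustrative assumptions, and the query layer here is Python rather than the Visual Basic mentioned in the abstract.

```python
import xml.etree.ElementTree as ET

clause = ET.fromstring("""
<clause>
  <word syntax="predicate" semantics="action">created</word>
  <word syntax="subject"   semantics="agent">God</word>
  <word syntax="object"    semantics="patient">the heavens</word>
</clause>
""")

# Word order is preserved by document order...
surface = [w.text for w in clause]
# ...while attributes support ad hoc queries across the analyses.
subjects = [w.text for w in clause.findall("word[@syntax='subject']")]
print(surface)
print(subjects)
```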

  1. XML Based Scientific Data Management Facility

    Science.gov (United States)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and also to apply the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  2. Modeling business objects with XML schema

    CERN Document Server

    Daum, Berthold

    2003-01-01

    XML Schema is the new language standard from the W3C and the new foundation for defining data in Web-based systems. There is a wealth of information available about Schemas but very little understanding of how to use this highly formal specification for creating documents. Grasping the power of Schemas means going back to the basics of documents themselves, and the semantic rules, or grammars, that define them. Written for schema designers, system architects, programmers, and document authors, Modeling Business Objects with XML Schema guides you through understanding Schemas from the basic concepts, type systems, type derivation, inheritance, namespace handling, through advanced concepts in schema design.*Reviews basic XML syntax and the Schema recommendation in detail.*Builds a knowledge base model step by step (about jazz music) that is used throughout the book.*Discusses Schema design in large environments, best practice design patterns, and Schema''s relation to object-oriented concepts.

  3. XML for data representation and model specification in neuroscience.

    Science.gov (United States)

    Crook, Sharon M; Howell, Fred W

    2007-01-01

    EXtensible Markup Language (XML) technology provides an ideal representation for the complex structure of models and neuroscience data, as it is an open file format and provides a language-independent method for storing arbitrarily complex structured information. XML is composed of text and tags that explicitly describe the structure and semantics of the content of the document. In this chapter, we describe some of the common uses of XML in neuroscience, with case studies in representing neuroscience data and defining model descriptions based on examples from NeuroML. The specific methods that we discuss include (1) reading and writing XML from applications, (2) exporting XML from databases, (3) using XML standards to represent neuronal morphology data, (4) using XML to represent experimental metadata, and (5) creating new XML specifications for models.

  4. An introduction to XML query processing and keyword search

    CERN Document Server

    Lu, Jiaheng

    2013-01-01

    This book systematically and comprehensively covers the latest advances in XML data searching. It presents an extensive overview of the current query processing and keyword search techniques on XML data.

  5. Towards the XML schema measurement based on mapping between XML and OO domain

    Science.gov (United States)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, the measurement of the quality of UML models or XML schemas is still developing. One of the questions in the overall research guided by the ideas described in this paper is whether already-defined object-oriented design metrics can be applied to XML schemas based on predefined mappings. In this paper, basic ideas for such a mapping are presented. This mapping is a prerequisite for a future approach to measuring XML schema quality with object-oriented metrics.
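One way to picture the proposed mapping is to treat a `complexType` as the analogue of a class and count its element declarations, an attribute-count-style metric. The metric and mapping below are simplified illustrations of the idea, not the paper's actual definitions.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

schema = ET.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="age"  type="xs:int"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
""")

# complexType ~ class; its element declarations ~ class attributes.
metrics = {
    ct.get("name"): len(ct.findall(".//%selement" % XS))
    for ct in schema.iter(XS + "complexType")
}
print(metrics)
```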

  6. Achieving Adaptivity For OLAP-XML Federations

    DEFF Research Database (Denmark)

    Pedersen, D.; Pedersen, Torben Bach

    2003-01-01

    Motivated by the need for more flexible OLAP systems, this paper presents the results of work on logical integration of external data in OLAP databases, carried out in cooperation between the Danish OLAP client vendor \\targit and Aalborg University. Flexibility is ensured by supporting XML......'s ability to adapt to changes in its surroundings. This paper describes the potential problems that may interrupt the operation of the integration system, in particular those caused by the often autonomous and unreliable nature of external XML data sources, and methods for handling these problems...

  7. XML Schema Guide for Secondary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  8. Performance analysis of Java APIs for XML processing

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, XML markup language has acquired a considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as ...

  9. Processing XML with Java – a performance benchmark

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, XML markup language has acquired a considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, suc...

  10. PDBML: the representation of archival macromolecular structure data in XML.

    Science.gov (United States)

    Westbrook, John; Ito, Nobutoshi; Nakamura, Haruki; Henrick, Kim; Berman, Helen M

    2005-04-01

    The Protein Data Bank (PDB) has recently released versions of the PDB Exchange dictionary and the PDB archival data files in XML format collectively named PDBML. The automated generation of these XML files is driven by the data dictionary infrastructure in use at the PDB. The correspondences between the PDB dictionary and the XML schema metadata are described as well as the XML representations of PDB dictionaries and data files.

  11. Representing User Navigation in XML Retrieval with Structural Summaries

    DEFF Research Database (Denmark)

    Ali, M. S.; Consens, Mariano P.; Larsen, Birger

    This poster presents a novel way to represent user navigation in XML retrieval using collection statistics from XML summaries. Currently, developing user navigation models in XML retrieval is costly and the models are specific to collected user assessments. We address this problem by proposing...

  12. ANALISIS KOMUNIKASI DATA DENGAN XML DAN JSON PADA WEBSERVICE

    Directory of Open Access Journals (Sweden)

    Sudirman M.Kom

    2016-08-01

    Full Text Available Abstract— The size of the data exchanged by a web service over a network strongly affects transfer speed. XML and JSON are the data formats used for data communication with web services. JSON produces a smaller data size than the XML format. Keywords— data communication, web service, XML, JSON.
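A minimal sketch supporting the abstract's claim: the same record serialized as JSON is usually smaller than as XML, largely because XML repeats every tag name in its closing tag. The payload below is an invented example.

```python
import json
import xml.etree.ElementTree as ET

record = {"name": "Sudirman", "city": "Jakarta", "balance": 1500000}

json_payload = json.dumps(record)

root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(len(json_payload), len(xml_payload))
print(len(json_payload) < len(xml_payload))  # True for this record
```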

  13. Utilizing Structural Knowledge for Information Retrieval in XML Databases

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; Apers, Peter M.G.

    In this paper we address the problem of immediate translation of eXtensible Mark-up Language (XML) information retrieval (IR) queries to relational database expressions and stress the benefits of using an intermediate XML-specific algebra over relational algebra. We show how adding an XML-specific

  14. XML and E-Journals: The State of Play.

    Science.gov (United States)

    Wusteman, Judith

    2003-01-01

    Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…

  15. A Survey in Indexing and Searching XML Documents.

    Science.gov (United States)

    Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James

    2002-01-01

    Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…
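A toy version of the structured indexing the survey covers: an inverted index from term to the XML element paths containing it, which enables element-level rather than whole-document retrieval. The path encoding is an illustrative assumption.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def index_document(xml_text):
    """Build term -> set of element paths over one XML document."""
    root = ET.fromstring(xml_text)
    index = defaultdict(set)
    def walk(node, path):
        here = path + "/" + node.tag
        for term in (node.text or "").split():
            index[term.lower()].add(here)
        for child in node:
            walk(child, here)
    walk(root, "")
    return index

idx = index_document(
    "<article><title>XML indexing</title><body>indexing techniques</body></article>"
)
print(sorted(idx["indexing"]))  # both elements containing the term
```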

  16. Building adaptable and reusable XML applications with model transformations

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas

    2005-01-01

    We present an approach in which the semantics of an XML language is defined by means of a transformation from an XML document model (an XML schema) to an application specific model. The application specific model implements the intended behavior of documents written in the language. A transformation

  17. XML: How It Will Be Applied to Digital Library Systems.

    Science.gov (United States)

    Kim, Hyun-Hee; Choi, Chang-Seok

    2000-01-01

    Shows how XML is applied to digital library systems. Compares major features of XML with those of HTML and describes an experimental XML-based metadata retrieval system, which is based on the Dublin Core and is designed as a subsystem of the Korean Virtual Library and Information System (VINIS). (Author/LRW)
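Since the system described is based on the Dublin Core, a hedged sketch of such a metadata record in XML, queried by namespace, may help; the record content is invented, and only the namespace URI is the real Dublin Core one.

```python
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core element set namespace

record = ET.fromstring("""
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Digital Libraries and XML</dc:title>
  <dc:creator>Kim, Hyun-Hee</dc:creator>
  <dc:date>2000</dc:date>
</metadata>
""")

title = record.find(DC + "title").text
print(title)
```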

  18. Generando datos XML a partir de bases de datos relacionales

    OpenAIRE

    Migani, Silvina; Correa, Carlos; Vera, Cristina; Romera, Liliana

    2012-01-01

    The XML language, the languages that allow XML data to be manipulated, and their impact on the database world form the area in which this project takes place. It arose as an initiative of lecturers in the database area, with the aim of deepening the study of XML and experimenting with database engines that support it.

  19. XML: Ejemplos de uso (presentación)

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - XML application = markup language = vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  20. A comparison of database systems for XML-type data

    NARCIS (Netherlands)

    Risse, J.E.; Leunissen, J.A.M.

    2010-01-01

    Background: In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability
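One storage strategy such comparisons cover is shredding XML-type records into a relational store and querying with SQL; a minimal sketch with stdlib sqlite3 follows. The bioinformatics-flavoured data is an invented example.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = """
<proteins>
  <protein id="P1"><length>120</length></protein>
  <protein id="P2"><length>300</length></protein>
</proteins>
"""

# Shred the XML into a relational table, then query it with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (id TEXT, length INTEGER)")
for p in ET.fromstring(doc).findall("protein"):
    conn.execute(
        "INSERT INTO protein VALUES (?, ?)",
        (p.get("id"), int(p.find("length").text)),
    )

rows = conn.execute("SELECT id FROM protein WHERE length > 200").fetchall()
print(rows)
```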

  1. Treating JSON as a subset of XML

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2012-01-01

    XForms 1.0 was an XML technology originally designed as a replacement for HTML Forms. In addressing certain shortcomings of XForms 1.0, the next version, XForms 1.1, became far more than a forms language, but a declarative application language where application production time could be

  2. IR and OLAP in XML document warehouses

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    In this paper we propose to combine IR and OLAP (On-Line Analytical Processing) technologies to exploit a warehouse of text-rich XML documents. In the system we plan to develop, a multidimensional implementation of a relevance modeling document model will be used for interactively querying...

  3. A transaction model for XML databases

    NARCIS (Netherlands)

    Dekeyser, S.; Hidders, A.J.H.; Paredaens, J.

    2004-01-01

    The hierarchical and semistructured nature of XML data may cause complicated update behavior. Updates should not be limited to entire document trees, but should ideally involve subtrees and even individual elements. Providing a suitable scheduling algorithm for semistructured data can

  4. Implementing XML Schema Naming and Design Rules

    Energy Technology Data Exchange (ETDEWEB)

    Lubell, Joshua [National Institute of Standards and Technology (NIST); Kulvatunyou, Boonserm [ORNL; Morris, Katherine [National Institute of Standards and Technology (NIST); Harvey, Betty [Electronic Commerce Connection, Inc.

    2006-08-01

    We are building a methodology and tool kit for encoding XML schema Naming and Design Rules (NDRs) in a computer-interpretable fashion, enabling automated rule enforcement and improving schema quality. Through our experience implementing rules from various NDR specifications, we discuss some issues and offer practical guidance to organizations grappling with NDR development.

  5. XVCL: XML-based Variant Configuration Language

    DEFF Research Database (Denmark)

    Jarzabek, Stan; Basset, Paul; Zhang, Hongyu

    2003-01-01

    XVCL (XML-based Variant Configuration Language) is a meta-programming technique and tool that provides effective reuse mechanisms. XVCL is an open source software developed at the National University of Singapore. Being a modern and versatile version of Bassett's frames, a technology that has...

  6. DICOM supported sofware configuration by XML files

    International Nuclear Information System (INIS)

    LucenaG, Bioing Fabian M; Valdez D, Andres E; Gomez, Maria E; Nasisi, Oscar H

    2007-01-01

    A method is proposed for configuring informatics systems that support the DICOM standard using XML files. The difference from other proposals is that this system does not encode the information of a DICOM object file, but encodes the standard itself in an XML file. The development itself is the format of the XML files mentioned, so that they can support what DICOM specifies for multiple languages. In this way, the same configuration file (or files) can be used in different systems. Together with the generated XML configuration file, we also wrote a set of CSS and XSL files, so the same file can be visualized in a standard browser as a query system for the DICOM standard - an emerging use that was not a main objective but provides great utility and versatility. We also present some usage examples of the configuration file, mainly in relation to loading DICOM information objects. Finally, in the conclusions we show the utility the system had already provided when the edition of the DICOM standard changed from 2006 to 2007.

  7. Type Checking with XML Schema in XACT

    DEFF Research Database (Denmark)

    Kirkegaard, Christian; Møller, Anders

    to support XML Schema as type formalism. The technique is able to model advanced features, such as type derivations and overloaded local element declarations, and also datatypes of attribute values and character data. Moreover, we introduce optional type annotations to improve modularity of the type checking...

  8. Shuttle-Data-Tape XML Translator

    Science.gov (United States)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
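The XPath-style querying of a native XML database that the abstract mentions can be sketched with the standard library. The record layout below is invented for illustration; the real SDT content and field names differ.

```python
import xml.etree.ElementTree as ET

# Invented layout: records translated into a native XML structure,
# then searched by path expressions, in the spirit of the abstract.
DB = ET.fromstring("""
<sdt>
  <record id="1"><field name="MSID">V001</field></record>
  <record id="2"><field name="MSID">V002</field></record>
</sdt>
""")

# Gather every record whose MSID field matches the search criterion.
matches = [r.get("id") for r in DB.findall("record")
           if r.findtext("field[@name='MSID']") == "V002"]
print(matches)  # ['2']
```

Structuring the XML into sections keyed by the fields most often queried is what makes such path lookups efficient, as the abstract notes.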

  9. Using small XML elements to support relevance

    NARCIS (Netherlands)

    G. Ramirez Camps (Georgina); T.H.W. Westerveld (Thijs); A.P. de Vries (Arjen)

    2006-01-01

    htmlabstractSmall XML elements are often estimated relevant by the retrieval model but they are not desirable retrieval units. This paper presents a generic model that exploits the information obtained from small elements. We identify relationships between small and relevant elements and use this

  10. XTCE. XML Telemetry and Command Exchange Tutorial

    Science.gov (United States)

    Rice, Kevin; Kizzort, Brad; Simon, Jerry

    2010-01-01

    An XML Telemetry and Command Exchange (XTCE) tutorial oriented towards packets or minor frames is shown. The contents include: 1) The Basics; 2) Describing Telemetry; 3) Describing the Telemetry Format; 4) Commanding; 5) Forgotten Elements; 6) Implementing XTCE; and 7) GovSat.

  11. Flight Dynamic Model Exchange using XML

    Science.gov (United States)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.

  12. Converting from XML to HDF-EOS

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program recreates an HDF-EOS file from an Extensible Markup Language (XML) representation of the contents of that file. This program is one of two programs written to enable testing of the schemas described in the immediately preceding article to determine whether the schemas capture all details of HDF-EOS files.

  13. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  14. Interpreting XML documents via an RDF schema

    NARCIS (Netherlands)

    Klein, Michel; Handschuh, Siegfried; Staab, Steffen

    2003-01-01

    One of the major problems in the realization of the vision of the ``Semantic Web''; is the transformation of existing web data into sources that can be processed and used by machines. This paper presents a procedure that can be used to turn XML documents into knowledge structures, by interpreting

  15. Algebra-Based Optimization of XML-Extended OLAP Queries

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    In today’s OLAP systems, integrating fast changing data, e.g., stock quotes, physically into a cube is complex and time-consuming. The widespread use of XML makes it likely that such data is available in XML format on the WWW; thus, logically federating XML data with OLAP systems is desirable. This report presents a complete foundation for such OLAP-XML federations. This includes a prototypical query engine, a simplified query semantics based on previous work, and a complete physical algebra which enables precise modeling of the execution tasks of an OLAP-XML query. Effective algebra...

  16. StarDOM: From STAR format to XML

    International Nuclear Information System (INIS)

    Linge, Jens P.; Nilges, Michael; Ehrlich, Lutz

    1999-01-01

    StarDOM is a software package for the representation of STAR files as document object models and the conversion of STAR files into XML. This allows interactive navigation by using the Document Object Model representation of the data as well as easy access by XML query languages. As an example application, the entire BioMagResBank has been transformed into XML format. Using an XML query language, statistical queries on the collected NMR data sets can be constructed with very little effort. The BioMagResBank/XML data and the software can be obtained at http://www.nmr.embl-heidelberg.de/nmr/StarDOM/
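The kind of statistical query the abstract describes can be sketched over a toy XML-converted data set. The chemical-shift markup here is invented for illustration; the real BioMagResBank/XML schema is far richer.

```python
import xml.etree.ElementTree as ET
from statistics import mean

# Invented markup standing in for an XML-converted NMR entry.
doc = ET.fromstring("""
<entry>
  <shift atom="CA" value="58.1"/>
  <shift atom="CA" value="56.4"/>
  <shift atom="CB" value="30.2"/>
</entry>
""")

# A "statistical query": average chemical shift of all CA atoms.
ca = [float(s.get("value")) for s in doc.findall("shift[@atom='CA']")]
print(round(mean(ca), 2))  # 57.25
```

Once the data is in XML, such aggregations need only a path expression and a fold, which is why the abstract says statistical queries can be constructed with very little effort.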

  17. An exponentiation method for XML element retrieval.

    Science.gov (United States)

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the baseline BM25 at MAP.

  18. Internet-based data interchange with XML

    Science.gov (United States)

    Fuerst, Karl; Schmidt, Thomas

    2000-12-01

    In this paper, a complete concept for Internet Electronic Data Interchange (EDI) - a well-known buzzword in the area of logistics and supply chain management to enable the automation of the interactions between companies and their partners - using XML (eXtensible Markup Language) will be proposed. This approach is based on Internet and XML, because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is mostly too costly for small and medium sized enterprises, which want to integrate their suppliers and customers in a supply chain. The paper will also present the results of the implementation of a prototype for such a system, which has been developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early warning system to detect problems during the parts delivery process as early as possible, and a transport following system to pursue the transportation.
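The "early warning" function can be sketched as a check of an XML delivery notice against the promised date. The markup below is invented for illustration and is not EDIFACT, ANSI X.12, or the paper's actual message format.

```python
import xml.etree.ElementTree as ET
from datetime import date

# Invented delivery-notice markup in the spirit of the abstract.
NOTICE = """
<delivery order="4711">
  <promised>2000-11-20</promised>
  <expected>2000-11-23</expected>
</delivery>
"""

def is_late(xml_text):
    """Flag a delivery whose expected date slips past the promised date."""
    d = ET.fromstring(xml_text)
    promised = date.fromisoformat(d.findtext("promised"))
    expected = date.fromisoformat(d.findtext("expected"))
    return expected > promised

print(is_late(NOTICE))  # True
```

Because the notice is plain XML over the Internet rather than a traditional EDI message, even a small supplier can emit and check it, which is the cost argument the abstract makes.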

  19. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  20. An Exponentiation Method for XML Element Retrieval

    Science.gov (United States)

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the baseline BM25 at MAP. PMID:24696643

  1. KNOWLEDGE AND XML BASED CAPP SYSTEM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shijie; SONG Laigang

    2006-01-01

    In order to enhance the intelligence of the system and improve its interactivity with other systems, a knowledge- and XML-based computer aided process planning (CAPP) system is implemented. It includes user management, bill of materials (BOM) management, knowledge-based process planning, knowledge management and database maintenance sub-systems. The nested knowledge representation method the system provides can represent complicated arithmetic and logical relationships to handle process planning tasks. With the representation and manipulation of XML-based technological files, the system solves some important problems in the web environment, such as the efficiency of information interchange and the refreshing of web pages. The CAPP system is written in ASP VBScript, JavaScript and Visual C++ and uses an Oracle database. At present, the CAPP system is running at Shenyang Machine Tools. Its functions meet the requirements of enterprise production.

  2. XML for Detector Description at GLAST

    Energy Technology Data Exchange (ETDEWEB)

    Bogart, Joanne

    2002-04-30

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector.

  3. XML for detector description at GLAST

    International Nuclear Information System (INIS)

    Bogart, J.; Favretto, D.; Giannitrapani, R.

    2001-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. The author describes its use for a GLAST detector

  4. XML for Detector Description at GLAST

    International Nuclear Information System (INIS)

    Bogart, Joanne

    2002-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector

  5. The curse of namespaces in the domain of XML signature

    DEFF Research Database (Denmark)

    Jensen, Meiko; Liao, Lijun; Schwenk, Jörg

    2009-01-01

    The XML signature wrapping attack is one of the most discussed security issues of the Web Services security community during the last years. Until now, the issue has not been solved, and all countermeasure approaches proposed so far were shown to be insufficient. In this paper, we present yet another way to perform signature wrapping attacks by using the XML namespace injection technique. We show that the interplay of XML Signature, XPath, and the XML namespace concept has severe flaws that can be exploited for an attack, and that XML namespaces in general pose real trouble to digital signatures in the XML domain. Additionally, we present and discuss some new approaches to countering the proposed attack vector.

  6. Design of the XML Security System for Electronic Commerce Application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The invention of the World Wide Web (WWW) first triggered mass adoption of the Internet for public access to digital information exchange across the globe. To build a large market on the Web, a special security infrastructure needs to be put into place, transforming the wild-and-woolly Internet into a network with end-to-end protection. XML (eXtensible Markup Language) is widely accepted as a powerful data representation standard for electronic documents, so a security mechanism for XML documents must be provided in the first place to secure electronic commerce over the Internet. In this paper the authors design and implement a secure framework that provides an XML signature function, an XML element-wise encryption function, a smart-card-based crypto API library and Public Key Infrastructure (PKI) security functions to achieve confidentiality, integrity, message authentication, and/or signer authentication services for XML documents and existing non-XML documents that are exchanged over the Internet for e-commerce applications.

  7. Exploring PSI-MI XML Collections Using DescribeX

    Directory of Open Access Journals (Sweden)

    Samavi Reza

    2007-12-01

    PSI-MI has been endorsed by the protein informatics community as a standard XML data exchange format for protein-protein interaction datasets. While many public databases support the standard, there is a degree of heterogeneity in the way the proposed XML schema is interpreted and instantiated by different data providers. Analysis of schema instantiation in large collections of XML data is a challenging task that is unsupported by existing tools.

  8. An XML-based framework for personalized health management.

    Science.gov (United States)

    Lee, Hiye-Ja; Park, Seung-Hun; Jeong, Byeong-Soo

    2006-01-01

    This paper proposes a framework for personalized health management. In this framework, XML technology is used for representing and managing the health information and knowledge. Major components of the framework are Health Management Prescription (HMP) Expert System and Health Information Repository. The HMP Expert System generates a HMP efficiently by using XML-based templates. Health Information Repository provides integrated health information and knowledge for personalized health management by using XML and relational database together.

  9. Sample Scripts for Generating PaGE-OM XML [

    Lifescience Database Archive (English)

    Full Text Available Sample Scripts for Generating PaGE-OM XML This page is offering some sample scripts...on MySQL. Outline chart of procedure 6. Creating RDB tables for Generating PaGE-OM XML These scripts help yo...wnload: create_tables_sql2.zip 7. Generating PaGE-OM XML from phenotype data This sample Perl script helps y

  10. Integrating XML Data in the TARGIT OLAP System

    DEFF Research Database (Denmark)

    Pedersen, Dennis; Pedersen, Jesper; Pedersen, Torben Bach

    2004-01-01

    This paper presents work on logical integration of OLAP and XML data sources, carried out in cooperation between TARGIT, a Danish OLAP client vendor, and Aalborg University. A prototype has been developed that allows XML data on the WWW to be used as dimensions and measures in the OLAP system...... the ability to use XML data as measures, as well as a novel multigranular data model and query language that formalizes and extends the TARGIT data model and query language....

  11. TX-Kw: An Effective Temporal XML Keyword Search

    OpenAIRE

    Rasha Bin-Thalab; Neamat El-Tazi; Mohamed E.El-Sharkawi

    2013-01-01

    Inspired by the great success of information retrieval (IR) style keyword search on the web, keyword search on XML has emerged recently. Existing methods cannot resolve the challenges raised by keyword search over temporal XML documents. We propose a way to evaluate temporal keyword search queries over temporal XML documents. Moreover, we propose a new ranking method based on time-aware IR ranking methods to rank the results of temporal keyword search queries. Extensive experiments have been ...

  12. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our earlier work, in which we presented functional dependencies and normal forms for XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  13. An Extended Role Based Access Control Method for XML Documents

    Institute of Scientific and Technical Information of China (English)

    MENG Xiao-feng; LUO Dao-feng; OU Jian-bo

    2004-01-01

    As XML has become increasingly important as the data-exchange format of the Internet and intranets, access control on XML documents arises as a new issue. Role-based access control (RBAC) is an access control method that has been widely used on the Internet, in operating systems and in relational databases over the last ten years. Though RBAC is already relatively mature in those fields, new problems occur when it is applied to XML documents. This paper proposes an integrated model to resolve these problems, after a full analysis of the features of XML and RBAC.

  14. XML — an opportunity for data standards in the geosciences

    Science.gov (United States)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.
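The paper's point that markup describing what data *is* lets applications process it intelligently can be sketched with a toy borehole log. The element names below are invented for illustration; the paper's actual demonstration schema may differ.

```python
import xml.etree.ElementTree as ET

# Invented borehole-log markup in the spirit of the paper's demonstration.
LOG = ET.fromstring("""
<borehole id="BH-01">
  <interval from="0.0" to="3.5" lithology="clay"/>
  <interval from="3.5" to="9.0" lithology="sand"/>
</borehole>
""")

# Because the markup is descriptive rather than presentational, an
# application can compute with it directly, e.g. thickness per lithology.
thickness = {}
for iv in LOG.findall("interval"):
    t = float(iv.get("to")) - float(iv.get("from"))
    thickness[iv.get("lithology")] = thickness.get(iv.get("lithology"), 0.0) + t
print(thickness)  # {'clay': 3.5, 'sand': 5.5}
```

An HTML rendering of the same log, by contrast, would only say how to display the intervals, not what they mean, which is the contrast the abstract draws between XML and HTML.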

  15. An XML-Enabled Data Mining Query Language XML-DMQL

    NARCIS (Netherlands)

    Feng, L.; Dillon, T.

    2005-01-01

    Inspired by the good work of Han et al. (1996) and Elfeky et al. (2001) on the design of data mining query languages for relational and object-oriented databases, in this paper, we develop an expressive XML-enabled data mining query language by extension of XQuery. We first describe some

  16. A Layered View Model for XML Repositories and XML Data Warehouses

    NARCIS (Netherlands)

    Rajugan, R.; Chang, E.; Dillon, T.; Feng, L.

    The Object-Oriented (OO) conceptual models have the power in describing and modeling real-world data semantics and their inter-relationships in a form that is precise and comprehensible to users. Conversely, XML is fast emerging as the dominant standard for storing, describing and interchanging data

  17. Information persistence using XML database technology

    Science.gov (United States)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is represented and persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. 
This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and

  18. Rosetta Ligand docking with flexible XML protocols.

    Science.gov (United States)

    Lemmon, Gordon; Meiler, Jens

    2012-01-01

    RosettaLigand is premier software for predicting how a protein and a small molecule interact. Benchmark studies demonstrate that 70% of the top-scoring RosettaLigand predicted interfaces are within 2Å RMSD of the crystal structure [1]. The latest release of the RosettaLigand software includes many new features, such as (1) docking of multiple ligands simultaneously, (2) representing ligands as fragments for greater flexibility, (3) redesign of the interface during docking, and (4) an XML-script-based interface that gives the user full control of the ligand docking protocol.

  19. A Simple XML Producer-Consumer Protocol

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. 
The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of

  20. The CostGlue XML Schema

    OpenAIRE

    Furfari, Francesco; Potortì, Francesco; Savić, Dragan

    2008-01-01

    An XML schema for scientific metadata is described. It is used for the CostGlue archival program, developed in the framework of the European Union COST Action 285: "Modelling and simulation tools for research in emerging multi-service telecommunications". The schema is freely available under the GNU LGPL license at http://wnet.isti.cnr.it/software/costglue/schema/2007/CostGlue.xsd, or at its official repository, at http://lt.fe.uni-lj.si/costglue/schema/2007/costglue.xsd.

  1. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Metadata Encoding and Transmission Standard (METS). To find datasets in future portals and to make use of this data in one's own scientific work, proper selection of discovery metadata and application metadata is very important. Some XML-metadata profiles are not suitable for preservation, because version changes are very fast and make it nearly impossible to automate the migration. For other XML-metadata profiles, schema definitions are changed after publication of the profile, or the schema definitions become inaccessible, which might cause problems during validation of the metadata inside the preservation system [2]. Some metadata profiles are not used widely enough and might not even exist in the future. Eventually, discovery and application metadata have to be embedded into the mdWrap-subtree of the METS-XML. [1] http://www.archivematica.org [2] http://dx.doi.org/10.2218/ijdc.v7i1.215

  2. A generic framework for extracting XML data from legacy databases

    NARCIS (Netherlands)

    Thiran, Ph.; Estiévenart, F.; Hainaut, J.L.; Houben, G.J.P.M.

    2005-01-01

    This paper describes a generic framework by which semantics-based XML data can be derived from legacy databases. It consists of first recovering the conceptual schema of the database through reverse engineering techniques, and then converting this schema, or part of it, into XML-compliant data

  3. Algebra-Based Optimization of XML-Extended OLAP Queries

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    In today’s OLAP systems, integrating fast changing data physically into a cube is complex and time-consuming. Our solution, the “OLAP-XML Federation System,” makes it possible to reference the fast changing data in XML format in OLAP queries without physical integration. In this paper, we introduce...

  4. Interpreting XML documents via an RDF schema ontology

    NARCIS (Netherlands)

    Klein, Michel

    2002-01-01

    Many business documents are represented in XML. However XML only describes the structure of data, not its meaning. The meaning of data is required for advanced automated processing, as is envisaged in the "Semantic Web". Ontologies are often used to describe the meaning of data items. Many ontology

  5. A Runtime System for XML Transformations in Java

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Kirkegaard, Christian; Møller, Anders

    2004-01-01

    We show that it is possible to extend a general-purpose programming language with a convenient high-level data-type for manipulating XML documents while permitting (1) precise static analysis for guaranteeing validity of the constructed XML documents relative to the given DTD schemas, and (2...

  6. Streaming-based verification of XML signatures in SOAP messages

    DEFF Research Database (Denmark)

    Somorovsky, Juraj; Jensen, Meiko; Schwenk, Jörg

    2010-01-01

    approach for XML processing, the Web Services servers easily become a target of Denial-of-Service attacks. We present a solution for these problems: an external streaming-based WS-Security Gateway. Our implementation is capable of processing XML Signatures in SOAP messages using a streaming-based approach...

  7. Adaptive Hypermedia Educational System Based on XML Technologies.

    Science.gov (United States)

    Baek, Yeongtae; Wang, Changjong; Lee, Sehoon

    This paper proposes an adaptive hypermedia educational system using XML technologies, such as XML, XSL, XSLT, and XLink. Adaptive systems are capable of altering the presentation of the content of the hypermedia on the basis of a dynamic understanding of the individual user. The user profile can be collected in a user model, while the knowledge…

  8. Adding XML to the MIS Curriculum: Lessons from the Classroom

    Science.gov (United States)

    Wagner, William P.; Pant, Vik; Hilken, Ralph

    2008-01-01

    eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform-independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…

  9. A Typed Text Retrieval Query Language for XML Documents.

    Science.gov (United States)

    Colazzo, Dario; Sartiani, Carlo; Albano, Antonio; Manghi, Paolo; Ghelli, Giorgio; Lini, Luca; Paoli, Michele

    2002-01-01

    Discussion of XML focuses on a description of Tequyla-TX, a typed text retrieval query language for XML documents that can search on both content and structures. Highlights include motivations; numerous examples; word-based and char-based searches; tag-dependent full-text searches; text normalization; query algebra; data models and term language;…

  10. An XML-Based Protocol for Distributed Event Services

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the application of an XML (extensible mark-up language)-based protocol to the developing field of distributed processing by way of a computational grid which resembles an electric power grid. XML tags would be used to transmit events between the participants of a transaction, namely, the consumer and the producer of the grid scheme.

  11. EquiX-A Search and Query Language for XML.

    Science.gov (United States)

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  12. The appropriateness of XML for diagnostic description

    Energy Technology Data Exchange (ETDEWEB)

    Neto, A. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal)], E-mail: andre.neto@cfn.ist.utl.pt; Lister, J.B. [CRPP-EPFL, Association EURATOM-Confederation Suisse, 1015 Lausanne (Switzerland); Fernandes, H. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Yonekawa, I. [JAEA, Japan Atomic Energy Agency Naka (Japan); Varandas, C.A.F. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal)

    2007-10-15

    A standard for the self-description of fusion plasma diagnostics will be required in the near future. The motivation is to maintain and organize the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money. Since there is no existing standard to organize this kind of information, every EU Association stores and organizes each experiment in different ways. This can lead to severe problems when the particular organization schema is poorly documented. Standardization is the key to solving these problems. The required scope ranges from commercial information on the diagnostic (component supplier, component price), through the hardware description (component specifications, drawings), the operation of the equipment (finite state machines), and change control (who changed what and when), to internationalization (information in at least English and a local language). This problem will be met on the ITER project, for which a solution is essential. A strong candidate solution is the Extensible Markup Language (XML). In this paper, a review of the current status of XML-related technologies will be presented.

  13. The appropriateness of XML for diagnostic description

    International Nuclear Information System (INIS)

    Neto, A.; Lister, J.B.; Fernandes, H.; Yonekawa, I.; Varandas, C.A.F.

    2007-01-01

    A standard for the self-description of fusion plasma diagnostics will be required in the near future. The motivation is to maintain and organize the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money. Since there is no existing standard to organize this kind of information, every EU Association stores and organizes each experiment in different ways. This can lead to severe problems when the particular organization schema is poorly documented. Standardization is the key to solving these problems. The required scope ranges from commercial information on the diagnostic (component supplier, component price), through the hardware description (component specifications, drawings), the operation of the equipment (finite state machines), and change control (who changed what and when), to internationalization (information in at least English and a local language). This problem will be met on the ITER project, for which a solution is essential. A strong candidate solution is the Extensible Markup Language (XML). In this paper, a review of the current status of XML-related technologies will be presented.

  14. The Big Bang - XML expanding the information universe

    International Nuclear Information System (INIS)

    Rutt, S.; Chamberlain, M.; Buckley, G.

    2004-01-01

    The XML language is discussed as a tool in the information management. Industries are adopting XML as a means of making disparate systems talk with each other or as a means of swapping information between different organisations and different operating systems by using a common set of mark-up. More important to this discussion is the ability to use XML within the field of Technical Documentation and Publication. The capabilities of XML in work with different types of documents are presented. In conclusion, a summary is given of the benefits of using an XML solution: Precisely match your requirements at no more initial cost; Single Source Dynamic Content Delivery and Management; 100% of authors time is spent creating content; Content is no longer locked into its format; Reduced hardware and data storage requirements; Content survives the publishing lifecycle; Auto-versioning/release management control; Workflows can be mapped and electronic audit trails made

  15. An XML-hierarchical data structure for ENSDF

    International Nuclear Information System (INIS)

    Hurst, Aaron M.

    2016-01-01

    A data structure based on an eXtensible Markup Language (XML) hierarchy according to experimental nuclear structure data in the Evaluated Nuclear Structure Data File (ENSDF) is presented. A Python-coded translator has been developed to interpret the standard one-card records of the ENSDF datasets, together with their associated quantities defined according to field position, and generate corresponding representative XML output. The quantities belonging to this mixed-record format are described in the ENSDF manual. Of the 16 ENSDF records in total, XML output has been successfully generated for 15 records. An XML-translation for the Comment Record is yet to be implemented; this will be considered in a separate phase of the overall translation effort. Continuation records, not yet implemented, will also be treated in a future phase of this work. Several examples are presented in this document to illustrate the XML schema and methods for handling the various ENSDF data types. However, the proposed nomenclature for the XML elements and attributes need not necessarily be considered as a fixed set of constructs. Indeed, better conventions may be suggested and a consensus can be achieved amongst the various groups of people interested in this project. The main purpose here is to present an initial phase of the translation effort to demonstrate the feasibility of interpreting ENSDF datasets and creating a representative XML-structured hierarchy for data storage.
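A hedged sketch of the translation step the abstract describes: interpret a fixed-width "card" record by field position and emit a representative XML element. The field layout below is invented for illustration and is NOT the actual ENSDF record definition (that is specified in the ENSDF manual):

```python
import xml.etree.ElementTree as ET

def card_to_xml(card: str) -> ET.Element:
    # Assumed layout (hypothetical): nuclide id in cols 1-5,
    # record type in col 8, energy field in cols 10-19.
    fields = {
        "nucid":  card[0:5].strip(),
        "rtype":  card[7:8].strip(),
        "energy": card[9:19].strip(),
    }
    elem = ET.Element("record", attrib={"type": fields["rtype"]})
    ET.SubElement(elem, "nuclide").text = fields["nucid"]
    ET.SubElement(elem, "energy").text = fields["energy"]
    return elem

card = " 60CO  L  1332.5    "
xml_text = ET.tostring(card_to_xml(card), encoding="unicode")
print(xml_text)
```

The real translator must additionally handle all 16 record types, continuation records, and the quantity semantics defined by field position.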

  16. Modeling the Arden Syntax for medical decisions in XML.

    Science.gov (United States)

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax checking mechanisms: first an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
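The round-trip check described above (encode to XML, reverse-parse back to text, compare with the original) can be sketched with invented element names, not the actual ArdenML schema:

```python
import xml.etree.ElementTree as ET

# Toy "MLM"-like statement list (contents invented for the example).
statements = ["x := 1;", "conclude true;"]

# Encode the text form into an XML hierarchy.
root = ET.Element("mlm")
logic = ET.SubElement(root, "logic")
for s in statements:
    ET.SubElement(logic, "statement").text = s

# Reverse-parse: recover the text form from the XML encoding and
# verify it matches the original, as the paper's cycle does.
recovered = [st.text for st in root.findall("./logic/statement")]
assert recovered == statements
print("\n".join(recovered))
```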

  17. Principles of reusability of XML-based enterprise documents

    Directory of Open Access Journals (Sweden)

    Roman Malo

    2010-01-01

    Full Text Available XML (Extensible Markup Language) represents one of the flexible platforms for processing enterprise documents. Its simple syntax and the powerful software infrastructure for processing this type of document guarantee high interoperability of individual documents. XML is today one of the technologies influencing all aspects of the ICT area. In the paper, questions and basic principles of reusing XML-based documents are described in the field of enterprise documents. If XML databases or XML data types are used to store these types of documents, partial redundancy can be expected due to possible similarity between documents. This similarity can be found especially in the documents' structure and also in their content, and its elimination is a necessary part of data optimization. The main idea of the paper is focused on how complex XML documents can be divided into independent fragments that can be used as standalone documents, and how to process them. The conclusions can be applied within software tools working with XML-based structured data and documents, such as document management systems or content management systems.

  18. ECG and XML: an instance of a possible XML schema for the ECG telemonitoring.

    Science.gov (United States)

    Di Giacomo, Paola; Ricci, Fabrizio L

    2005-03-01

    Management of many types of chronic diseases relies heavily on patients' self-monitoring of their condition. In recent years, Internet-based home telemonitoring systems have become available that transmit patient data to a central database and give care providers immediate access to it. The adoption of Extensible Markup Language (XML) as a W3C standard has generated considerable interest in the potential value of this language in health informatics. However, telemonitoring systems often work with only one or a few types of medical devices, because different devices produce different types of data and existing systems are generally built around a proprietary data schema. In this paper, we describe a generic data schema for a telemonitoring system that is applicable to different types of medical devices and different diseases. We then present an architecture for exchanging clinical information (data, telemonitoring signals and clinical reports) in the XML standard between all the structures involved in the patient's healthcare process, keeping each electronic patient record up to date and integrating, in real time, the information collected during telemonitoring activities into the XML schema.

  19. Towards privacy-preserving XML transformation

    DEFF Research Database (Denmark)

    Jensen, Meiko; Kerschbaum, Florian

    2011-01-01

    In composite web services one can only either hide the identities of the participants or provide end-to-end confidentiality via encryption. For a designer of inter-organizational business processes this implies that she either needs to reveal her suppliers or force her customers to reveal...... their information. In this paper we present a solution to the encrypted data modification problem and reconcile this apparent conflict. Using a generic sender-transformer-recipient example scenario, we illustrate the steps required for applying XML transformations to encrypted data, present the cryptographic...... building blocks, and give an outlook on advantages and weaknesses of the proposed encryption scheme. The transformer is then able to offer composite services without itself learning the content of the messages....

  20. A Study of XML in the Library Science Curriculum in Taiwan and South East Asia

    Science.gov (United States)

    Chang, Naicheng; Huang, Yuhui; Hopkinson, Alan

    2011-01-01

    This paper aims to investigate the current XML-related courses available in 96 LIS schools in South East Asia and Taiwan's 9 LIS schools. Also, this study investigates the linkage of library school graduates in Taiwan who took different levels of XML-related education (that is XML arranged as an individual course or XML arranged as a section unit…

  1. Using XML to Separate Content from the Presentation Software in eLearning Applications

    Science.gov (United States)

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…
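The content/presentation separation the abstract describes can be sketched as follows: the XML carries only meaningfully tagged content, and independent renderers decide how to present it. Tag names and renderer functions are invented for the example:

```python
import xml.etree.ElementTree as ET

# Content marked up with meaningful (invented) tags; no presentation here.
content = ET.fromstring(
    "<lesson><title>Photosynthesis</title>"
    "<objective>Describe the light reactions</objective></lesson>"
)

# Two independent "presentation layers" over the same content.
def render_html(doc):
    return "<h1>{}</h1><p>{}</p>".format(
        doc.findtext("title"), doc.findtext("objective"))

def render_plain(doc):
    return "{}\n- {}".format(doc.findtext("title"), doc.findtext("objective"))

html = render_html(content)
plain = render_plain(content)
print(html)
print(plain)
```

Because the tags describe meaning rather than appearance, swapping or adding renderers never requires touching the content documents.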

  2. Integrity Checking and Maintenance with Active Rules in XML Databases

    DEFF Research Database (Denmark)

    Christiansen, Henning; Rekouts, Maria

    2007-01-01

    While specification languages for integrity constraints for XML data have been considered in the literature, actual technologies and methodologies for checking and maintaining integrity are still in their infancy. Triggers, or active rules, which are widely used in previous technologies for the p...... updates, the method indicates trigger conditions and correctness criteria to be met by the trigger code supplied by a developer or possibly automatic methods. We show examples developed in the Sedna XML database system which provides a running implementation of XML triggers....

  3. δ-dependency for privacy-preserving XML data publishing.

    Science.gov (United States)

    Landberg, Anders H; Nguyen, Kinh; Pardede, Eric; Rahayu, J Wenny

    2014-08-01

    An ever increasing amount of medical data, such as electronic health records, is being collected, stored, shared and managed in large online health information systems and electronic medical record (EMR) systems (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1-3]. From such rich collections, data is often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need to protect the privacy of individuals, an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of the useful semantic information previously carried in the private XML data. Health data often has a very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g. HL7 version 3(1)) and publishing health information, so the lack of means to deal directly with data in XML format is a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and which takes into consideration the hierarchical nature of sensitive data (as opposed to "quasi-identifiers"). Lastly, we provide an implementation of our model, algorithm and privacy property, and perform an experimental analysis.
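A minimal sketch of the general publish-by-transformation idea (not the paper's δ-dependency algorithm): map a private XML record to a published one by generalizing a sensitive leaf value in place. Element names and the generalization rule are invented:

```python
import xml.etree.ElementTree as ET

def generalize_zip(doc, keep_digits=2):
    # Replace trailing digits of every (hypothetical) zipcode element
    # with '*' so the published record is less identifying.
    for z in doc.iter("zipcode"):
        z.text = z.text[:keep_digits] + "*" * (len(z.text) - keep_digits)
    return doc

private = ET.fromstring(
    "<patient><zipcode>90210</zipcode><diagnosis>flu</diagnosis></patient>")
published = generalize_zip(private)
print(ET.tostring(published, encoding="unicode"))
```

A real model would drive such transformations from the schema mapping rather than hard-coding them per element.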

  4. Work orders management based on XML file in printing

    Directory of Open Access Journals (Sweden)

    Ran Peipei

    2018-01-01

    Full Text Available Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express work-order information can improve efficiency in management and production. Accordingly, in this paper we introduce a technique for managing work orders and generate an XML file through the Document Object Model (DOM). When the information is needed for production, the XML file is parsed and the information saved in a database, which makes the information easy to preserve and modify.
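The DOM-based construction and parse-back described above can be sketched with Python's xml.dom.minidom; the element names and values are invented for the example:

```python
from xml.dom.minidom import Document, parseString

# Build the work-order document through the DOM.
doc = Document()
order = doc.createElement("workOrder")
order.setAttribute("id", "WO-001")
doc.appendChild(order)
for tag, value in [("customer", "Print Shop A"), ("copies", "500")]:
    node = doc.createElement(tag)
    node.appendChild(doc.createTextNode(value))
    order.appendChild(node)

xml_text = doc.toxml()

# The "parse the XML file and save the information" step: parse back
# and extract a field that would go into the database.
parsed = parseString(xml_text)
copies = parsed.getElementsByTagName("copies")[0].firstChild.data
print(copies)
```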

  5. Application of XML in real-time data warehouse

    Science.gov (United States)

    Zhao, Yanhong; Wang, Beizhan; Liu, Lizhao; Ye, Su

    2009-07-01

    At present, XML is one of the most widely used technologies for describing and exchanging data, and the need for real-time data makes the real-time data warehouse a popular area of data warehouse research. What can be gained by applying XML technology to research on real-time data warehouses? XML technology solves many technical problems that cannot be addressed in a traditional real-time data warehouse and realizes the integration of the OLAP (On-Line Analytical Processing) and OLTP (On-Line Transaction Processing) environments. The real-time data warehouse can then truly be called "real time".

  6. Static Analysis for Event-Based XML Processing

    DEFF Research Database (Denmark)

    Møller, Anders

    2008-01-01

    Event-based processing of XML data - as exemplified by the popular SAX framework - is a powerful alternative to using W3C's DOM or similar tree-based APIs. The event-based approach processes documents in a streaming fashion with minimal memory consumption. This paper discusses challenges for creating program analyses...... for SAX applications. In particular, we consider the problem of statically guaranteeing that a given SAX program always produces only well-formed and valid XML output. We propose an analysis technique based on existing analyses of Servlets, string operations, and XML graphs....
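The event-based style can be sketched with Python's SAX API: the handler reacts to start-element events as they stream past and never materializes a tree, so memory use stays constant regardless of document size:

```python
import io
import xml.sax

class CountingHandler(xml.sax.ContentHandler):
    """Count element occurrences from streamed start-element events."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        self.counts[name] = self.counts.get(name, 0) + 1

handler = CountingHandler()
xml.sax.parse(io.StringIO("<log><entry/><entry/><entry/></log>"), handler)
print(handler.counts)
```

Note that nothing in the handler constrains what output a SAX program might emit, which is exactly why static guarantees about output validity, as the paper investigates, are nontrivial.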

  7. XML-based information system for planetary sciences

    Science.gov (United States)

    Carraro, F.; Fonte, S.; Turrini, D.

    2009-04-01

    EuroPlaNet (EPN in the following) has been developed by the planetological community under the "Sixth Framework Programme" (FP6 in the following), the European programme devoted to the improvement of the European research efforts through the creation of an internal market for science and technology. The goal of the EPN programme is the creation of a European network aimed at the diffusion of data produced by space missions dedicated to the study of the Solar System. A special place within the EPN programme is that of I.D.I.S. (Integrated and Distributed Information Service). The main goal of IDIS is to offer the planetary science community user-friendly access to the data and information produced by the various types of research activities, i.e. Earth-based observations, space observations, modeling, theory and laboratory experiments. During the FP6 programme, IDIS development consisted of the creation of a series of thematic nodes, each of them specialized in a specific scientific domain, and a technical coordination node. The four thematic nodes are the Atmosphere node, the Plasma node, the Interiors & Surfaces node and the Small Bodies & Dust node. The main task of the nodes has been the building up of selected scientific cases related to the scientific domain of each node. The second task of the EPN nodes has been the creation of a catalogue of resources related to their main scientific theme. Both these efforts have been used as the basis for the development of the main IDIS goal, i.e. the integrated distributed service. An XML-based data model has been developed to describe resources using metadata and to store the metadata within an XML-based database called eXist. A search engine has then been developed in order to allow users to search for resources within the database. Users can select the resource type and can insert one or more values, or choose a value among those present in a list, depending on the selected resource.
The system searches for all

  8. Analysis of XML-RPC Technology and Its Application

    Institute of Scientific and Technical Information of China (English)

    姚鹤岭

    2005-01-01

    To illustrate the practical value of XML-RPC technology in specific settings, this paper introduces the concepts and characteristics of the XML-based XML-RPC distributed technology. In writing a Meerkat client program, functionality similar to the ArcWeb service was implemented using the Python language. The study shows that, under certain conditions, XML-RPC can satisfy well the needs for communication and interoperability between different applications.

  9. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    Full Text Available The eXtensible Markup Language (XML) is a format widely used as a tool for data exchange and storage. It is being increasingly used in the secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long; to reduce their size and allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified Burrows-Wheeler Transform (BWT) for compressing XML files in a lossless fashion. The experimental results show that the proposed technique outperforms both general-purpose and XML-specific compressors.
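The Burrows-Wheeler transform at the core of such a scheme can be sketched as follows (a textbook BWT, not the paper's modified variant): BWT clusters characters with similar contexts together, which makes the repetitive markup in verbose XML far more compressible by a subsequent run-length or entropy coder:

```python
def bwt(s: str) -> str:
    # Append a unique sentinel, sort all rotations, take last column.
    s += "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t: str) -> str:
    # Invert by repeatedly prepending the transform column and sorting.
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    row = next(r for r in table if r.endswith("\0"))
    return row.rstrip("\0")

xml_fragment = "<a><b/><b/><b/></a>"
transformed = bwt(xml_fragment)
assert ibwt(transformed) == xml_fragment   # lossless round-trip
print(transformed)
```

This naive O(n² log n) construction is for illustration only; practical implementations use suffix arrays and add a move-to-front plus entropy-coding stage.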

  10. XML DTD and Schemas for HDF-EOS

    Science.gov (United States)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files and their contents, and an equivalent standard in the form of schemas, have been developed.

  11. The XML approach to implementing space link extension service management

    Science.gov (United States)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  12. Semi-automatic Citation Correction with Lemon8-XML

    Directory of Open Access Journals (Sweden)

    MJ Suhonos

    2009-03-01

    Full Text Available The Lemon8-XML software application, developed by the Public Knowledge Project (PKP, provides an open-source, computer-assisted interface for reliable citation structuring and validation. Lemon8-XML combines citation parsing algorithms with freely-available online indexes such as PubMed, WorldCat, and OAIster. Fully-automated markup of entire bibliographies may be a genuine possibility using this approach. Automated markup of citations would increase bibliographic accuracy while reducing copyediting demands.

  13. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.
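The kind of query such a comparison must measure can be sketched as an XPath-style lookup over simplified Medline-like citation records; the structure below is a small stand-in, not the real Medline DTD:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for Medline citation data (structure invented).
data = ET.fromstring(
    "<MedlineCitationSet>"
    "<MedlineCitation><PMID>1</PMID>"
    "<ArticleTitle>XML storage</ArticleTitle></MedlineCitation>"
    "<MedlineCitation><PMID>2</PMID>"
    "<ArticleTitle>Query times</ArticleTitle></MedlineCitation>"
    "</MedlineCitationSet>"
)

# XPath-style retrieval of all article titles, the sort of access
# pattern whose latency varies widely across the reviewed systems.
titles = [c.findtext("ArticleTitle")
          for c in data.findall(".//MedlineCitation")]
print(titles)
```

In a native XML database the same lookup would be expressed in XQuery/XPath and evaluated against indexes rather than an in-memory tree.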

  14. CSchema: A Downgrading Policy Language for XML Access Control

    Institute of Scientific and Technical Information of China (English)

    Dong-Xi Liu

    2007-01-01

    The problem of regulating access to XML documents has attracted much attention from both the academic and industry communities. In existing approaches, the XML elements specified by access policies are either accessible or inaccessible according to their sensitivity. However, in some cases the original XML elements are sensitive and inaccessible, but after being processed in appropriate ways the results become insensitive and thus accessible. This paper proposes a policy language to accommodate such cases, which can express downgrading operations on sensitive data in XML documents through explicit calculations on them. The proposed policy language is called calculation-embedded schema (CSchema), which extends ordinary schema languages with a protection type for protecting sensitive data and specifying downgrading operations. The CSchema language has a type system to guarantee the type correctness of the embedded calculation expressions; this type system also generates a security view after type checking a CSchema policy. Access policies specified by CSchema are enforced by a validation procedure, which produces released documents containing only the accessible data by validating the protected documents against CSchema policies. These released documents are then ready to be accessed by, for instance, XML query engines. By incorporating this validation procedure, other XML processing technologies can use CSchema as their access control module.

  15. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    Science.gov (United States)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary greatly in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered to minimize the processing time during synchronization. Predicting the parsing time of BOs is a significant input to the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
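A minimal version of such a measurement setup, timing two standard-library parsers on the same synthetic business-object-like document; the document shape (many small repeated elements) is invented, and the paper's four parsers and BO structures are of course different:

```python
import timeit
import xml.dom.minidom
import xml.etree.ElementTree as ET

# Synthetic document: many small elements, one of the structural
# factors that drives parsing time.
doc = "<bo>" + "<item><id>1</id><name>x</name></item>" * 500 + "</bo>"

t_dom = timeit.timeit(lambda: xml.dom.minidom.parseString(doc), number=20)
t_et = timeit.timeit(lambda: ET.fromstring(doc), number=20)
print(f"minidom: {t_dom:.4f}s  ElementTree: {t_et:.4f}s")
```

Varying the element count, nesting depth, and text size over a grid of such documents yields the data points from which a cost model can be fitted.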

  16. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.

  17. Encoding of coordination complexes with XML.

    Science.gov (United States)

    Vinoth, P; Sankar, P

    2017-09-01

    An in-silico system to encode structure, bonding and properties of coordination complexes is developed. The encoding is achieved through a semantic XML markup frame. Composition of the coordination complexes is captured in terms of central atom and ligands. Structural information of central atom is detailed in terms of electron status of valence electron orbitals. The ligands are encoded with specific reference to the electron environment of ligand centre atoms. Behaviour of ligands to form low or high spin complexes is accomplished by assigning a Ligand Centre Value to every ligand based on the electronic environment of ligand centre atom. Chemical ontologies are used for categorization purpose and to control different hybridization schemes. Complexes formed by the central atoms of transition metal, non-transition elements belonging to s-block, p-block and f-block are encoded with a generic encoding platform. Complexes of homoleptic, heteroleptic and bridged types are also covered by this encoding system. Utility of the encoded system to predict redox electron transfer reaction in the coordination complexes is demonstrated with a simple application. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Realization Of Algebraic Processor For XML Documents Processing

    International Nuclear Information System (INIS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2010-01-01

    This paper presents possibilities for implementing an algebraic method for processing XML hierarchical data that speeds up the XML search mechanism. It offers a different point of view on building an advanced algebraic processor, together with the necessary software tools and programming modules. This nontraditional approach to fast XML navigation may help to build a simpler, user-friendly interface for XML transformations, avoiding the complicated language constructions of XSL, XSLT and XPath. The approach allows comparatively simple searching of XML hierarchical data by means of two types of functions: specification functions and so-called built-in functions. The choice of Java as the programming language may appear strange at first, but it is justified by the ability of the applications to run on different kinds of computers. The specific search mechanism, based on linear algebra theory, is about 30% faster than MSXML parsers on the developed examples. Linear algebra theory also opens the possibility of creating new software tools that cover the whole range of navigation and search techniques characterizing XSLT/XPath. The proposed method is able to replace more complicated operations in other SOA components.
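    The XPath-style navigation that such a processor aims to outperform can be shown in miniature with Python's ElementTree, which supports a limited XPath subset (the document below is invented for the example):

```python
import xml.etree.ElementTree as ET

# A small hierarchy to navigate; findall() accepts a restricted XPath
# dialect including descendant axes and attribute predicates.
xml_doc = """<library>
  <shelf id="A">
    <book year="1999">SGML Handbook</book>
    <book year="2004">XQuery Basics</book>
  </shelf>
  <shelf id="B">
    <book year="2004">XSLT Cookbook</book>
  </shelf>
</library>"""

root = ET.fromstring(xml_doc)
# All books from 2004, anywhere in the tree, in document order:
hits = [b.text for b in root.findall(".//book[@year='2004']")]
```

    Queries like this are the baseline against which an algebraic search mechanism would be benchmarked.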

  19. The SGML Standardization Framework and the Introduction of XML

    Science.gov (United States)

    Grütter, Rolf

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future. PMID:11720931

  20. XML-BSPM: an XML format for storing Body Surface Potential Map recordings.

    Science.gov (United States)

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-05-14

    The Body Surface Potential Map (BSPM) is an electrocardiographic method, for recording and displaying the electrical activity of the heart, from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM. In fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. This XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, i.e. the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web based BSPM viewer. This format, if widely adopted could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats
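    The abstract's point that file sizes could shrink further under the deflate algorithm can be illustrated with Python's zlib module, which wraps deflate; the electrode data below is synthetic, not an actual BSPM recording:

```python
import zlib

# Synthetic stand-in for an XML-BSPM recording: repetitive markup plus
# numeric samples, which compresses well under deflate.
samples = "".join(
    f"<sample lead='{i % 117}'>{i * 7 % 4096}</sample>" for i in range(2000)
)
xml_doc = f"<bspm><electrodes>{samples}</electrodes></bspm>"

raw = xml_doc.encode("utf-8")
compressed = zlib.compress(raw, 6)   # zlib implements the deflate algorithm

ratio = len(compressed) / len(raw)   # well under half the original size
```

    Because XML markup is highly repetitive, even the default deflate settings typically recover most of the size overhead of a verbose, human-readable format.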

  1. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, choosing of scripting languages, graphic design, user interface design, and database design. The challenge of a Web designer/implementer is his ability to create an attractive and informative Web. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web's objective. In this article we will discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We will also discuss the advantages and disadvantages of HTML over its successor XML in designing and implementing a Web. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML to carry out the present investigation. (author)

  2. Generating XML schemas for DICOM structured reporting templates.

    Science.gov (United States)

    Zhao, Luyin; Lee, Kwok Pun; Hu, Jingkun

    2005-01-01

    In this paper, the authors describe a methodology to transform programmatically structured reporting (SR) templates defined by the Digital Imaging and Communications for Medicine (DICOM) standard into an XML schema representation. Such schemas can be used in the creation and validation of XML-encoded SR documents that use templates. Templates are a means to put additional constraints on an SR document to promote common formats for specific reporting applications or domains. As the use of templates becomes more widespread in the production of SR documents, it is important to ensure validity of such documents. The work described in this paper is an extension of the authors' previous work on XML schema representation for DICOM SR. Therefore, this paper inherits and partially modifies the structure defined in the earlier work.

  3. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully applied these methods to information models in the HIV and radiological image domains.

  4. The Design Space of Type Checkers for XML Transformation Languages

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    2005-01-01

    We survey work on statically type checking XML transformations, covering a wide range of notations and ambitions. The concept of type may vary from idealizations of DTD to full-blown XML Schema or even more expressive formalisms. The notion of transformation may vary from clean and simple transductions to domain-specific languages or integration of XML in general-purpose programming languages. Type annotations can be either explicit or implicit, and type checking ranges from exact decidability to pragmatic approximations. We characterize and evaluate existing tools in this design space, including a recent result of the authors providing practical type checking of full unannotated XSLT 1.0 stylesheets given general DTDs that describe the input and output languages.

  5. Design of XML-based plant data model

    International Nuclear Information System (INIS)

    Nair, Preetha M.; Padmini, S.; Gaur, Swati; Diwakar, M.P.

    2013-01-01

    XML has emerged as an open standard for exchanging structured data on various platforms to handle rich, nested, complex data structures. XML with its flexible tree-like data structure allows a more natural representation as compared to traditional databases. In this paper we present data model for plant data acquisition systems captured using XML technologies. Plant data acquisition systems in a typical Nuclear Power Plant consists of embedded nodes at the first tier and operator consoles at the second tier for operator operation, interaction and display of Plant parameters. This paper discusses a generic data model that was designed to capture process, network architecture, communication/interface protocol and diagnostics aspects required for a Nuclear Power Plant. (author)

  6. Overview of the INEX 2008 XML Mining Track

    Science.gov (United States)

    Denoyer, Ludovic; Gallinari, Patrick

    We describe here the XML Mining Track at INEX 2008. This track was launched for exploring two main ideas: first identifying key problems for mining semi-structured documents and new challenges of this emerging field and second studying and assessing the potential of machine learning techniques for dealing with generic Machine Learning (ML) tasks in the structured domain i.e. classification and clustering of semi structured documents. This year, the track focuses on the supervised classification and the unsupervised clustering of XML documents using link information. We consider a corpus of about 100,000 Wikipedia pages with the associated hyperlinks. The participants have developed models using the content information, the internal structure information of the XML documents and also the link information between documents.

  7. Development Life Cycle and Tools for XML Content Models

    Energy Technology Data Exchange (ETDEWEB)

    Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Buhwan, Jeong [POSTECH University, South Korea]; Goyal, Puja [National Institute of Standards and Technology (NIST)]

    2004-11-01

    Many integration projects today rely on shared semantic models based on standards represented using Extensible Markup Language (XML) technologies. Shared semantic models typically evolve and require maintenance. In addition, to promote interoperability and reduce integration costs, the shared semantics should be reused as much as possible. Semantic components must be consistent and valid in terms of agreed-upon standards and guidelines. In this paper, we describe an activity model for creation, use, and maintenance of a shared semantic model that is coherent and supports efficient enterprise integration. We then use this activity model to frame our research and the development of tools to support those activities. We provide overviews of these tools primarily in the context of the W3C XML Schema. At present, we focus our work on the W3C XML Schema as the representation of choice, due to its extensive adoption by industry.

  8. XML in an Adaptive Framework for Instrument Control

    Science.gov (United States)

    Ames, Troy J.

    2004-01-01

    NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.

  9. Embedded XML DOM Parser: An Approach for XML Data Processing on Networked Embedded Systems with Real-Time Requirements

    Directory of Open Access Journals (Sweden)

    Cavia Soto, M. Angeles

    2008-01-01

    Trends in control and automation show an increase in data processing and communication in embedded automation controllers. The eXtensible Markup Language (XML) is emerging as a dominant data syntax, fostering interoperability, yet little is still known about how to provide predictable real-time performance in XML processing, as required in the domain of industrial automation. This paper presents an XML processor that is designed with such real-time performance in mind. The publication attempts to disclose insight gained in applying techniques such as object pooling and reuse, and other methods targeted at avoiding dynamic memory allocation and its consequent memory fragmentation. Benchmarking tests are reported in order to illustrate the benefits of the approach.
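    Object pooling of the kind named in the abstract, reusing node objects instead of allocating fresh ones on every parse, might be sketched as follows; this is a simplified illustration, not the paper's embedded DOM implementation:

```python
class Node:
    """Reusable DOM-like node; fields are reset in place rather than reallocated."""
    __slots__ = ("tag", "text", "children")

    def reset(self, tag):
        self.tag = tag
        self.text = ""
        self.children = []
        return self


class NodePool:
    """Pre-allocates nodes so steady-state parsing performs no heap allocation."""

    def __init__(self, size):
        self._free = [Node() for _ in range(size)]

    def acquire(self, tag):
        # Fall back to fresh allocation only if the pool is exhausted.
        node = self._free.pop() if self._free else Node()
        return node.reset(tag)

    def release(self, node):
        node.children = []   # drop references so children can be released too
        self._free.append(node)


pool = NodePool(size=4)
first = pool.acquire("document")
pool.release(first)
second = pool.acquire("element")   # same object, recycled rather than reallocated
recycled = first is second
```

    Keeping allocation out of the parse loop is what makes worst-case timing predictable: the cost of acquiring a node no longer depends on heap state.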

  10. 77 FR 46986 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Science.gov (United States)

    2012-08-07

    ... Supplementary Information Section below for details. DATES: The draft XML Schema is now available at the links...] Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema AGENCY: Federal... Regulatory Commission is making available on its Web site ( http://www.ferc.gov ) a draft of the XML schema...

  11. Applying Analogical Reasoning Techniques for Teaching XML Document Querying Skills in Database Classes

    Science.gov (United States)

    Mitri, Michel

    2012-01-01

    XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…

  12. An XML description of detector geometries for GEANT4

    International Nuclear Information System (INIS)

    Figgins, J.; Walker, B.; Comfort, J.R.

    2006-01-01

    A code has been developed that enables the geometry of detectors to be specified easily and flexibly in the XML language, for use in the Monte Carlo program GEANT4. The user can provide clear documentation of the geometry without being proficient in the C++ language of GEANT4. The features and some applications are discussed

  13. IMPROVING THE VIRTUAL LEARNING DEVELOPMENT PROCESSES USING XML STANDARDS

    Directory of Open Access Journals (Sweden)

    Kurt Suss

    2002-06-01

    Distributed learning environments and content often lack a common basis for the exchange of learning materials. This delays, or even hinders, both innovation and delivery of learning technology. Standards for platforms and authoring may provide a way to improve interoperability and cooperative development. This article provides an XML-based approach to this problem created by the IMS Global Learning Consortium.

  14. The XBabelPhish MAGE-ML and XML translator.

    Science.gov (United States)

    Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A

    2008-01-18

    MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.
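    A translation between two XML variants of the same information, the kind of task XBabelPhish performs with XQuery, can be sketched in miniature with ElementTree; the two dialects below are invented for the example:

```python
import xml.etree.ElementTree as ET

# Two invented dialects describing the same measurement: dialect A uses
# attributes, dialect B uses child elements.
dialect_a = '<reading probe="ch1" value="42.5"/>'

def a_to_b(src: str) -> str:
    """Translate a dialect-A document into its dialect-B equivalent."""
    a = ET.fromstring(src)
    b = ET.Element("measurement")
    ET.SubElement(b, "channel").text = a.get("probe")
    ET.SubElement(b, "amount").text = a.get("value")
    return ET.tostring(b, encoding="unicode")

dialect_b = a_to_b(dialect_a)
```

    A production translator such as XBabelPhish must instead stream its transformation, since in-memory tree building does not scale to the >200 MB files common in MAGE-ML.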

  15. Castles Made of Sand: Building Sustainable Digitized Collections Using XML.

    Science.gov (United States)

    Ragon, Bart

    2003-01-01

    Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)

  16. A Conversion Tool for Mathematical Expressions in Web XML Files.

    Science.gov (United States)

    Ohtake, Nobuyuki; Kanahori, Toshihiro

    2003-01-01

    This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…

  17. Improving the Virtual Learning Development Processes Using XML Standards.

    Science.gov (United States)

    Suss, Kurt; Oberhofer, Thomas

    2002-01-01

    Suggests that distributed learning environments and content often lack a common basis for the exchange of learning materials, which can hinder or even delay innovation and delivery of learning technology. Standards for platforms and authoring may provide a way to improve interoperability and cooperative development. Provides an XML-based approach…

  18. A Database Approach to Content-based XML retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2003-01-01

    This paper describes a first prototype system for content-based retrieval from XML data. The system's design supports both XPath queries and complex information retrieval queries based on a language modelling approach to information retrieval. Evaluation using the INEX benchmark shows that it is

  19. XML/TEI Stand-off Markup. One step beyond.

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena

    2018-01-01

    Stand-off markup is widely considered as a possible solution for overcoming the limitation of inline XML markup, primarily dealing with multiple overlapping hierarchies. Considering previous contributions on the subject and implementations of stand-off markup, we propose a new TEI-based model for

  20. Personalization of XML Content Browsing Based on User Preferences

    Science.gov (United States)

    Encelle, Benoit; Baptiste-Jessel, Nadine; Sedes, Florence

    2009-01-01

    Personalization of user interfaces for browsing content is a key concept to ensure content accessibility. In this direction, we introduce concepts that result in the generation of personalized multimodal user interfaces for browsing XML content. User requirements concerning the browsing of a specific content type can be specified by means of…

  1. An XML-based communication protocol for accelerator distributed controls

    International Nuclear Information System (INIS)

    Catani, L.

    2008-01-01

    This paper presents the development of XMLvRPC, an RPC-like communication protocol based, for this particular application, on the TCP/IP and XML (eXtensible Markup Language) tools built into LabVIEW. XML is used to format commands and data passed between client and server, while the socket interface for communication uses either the TCP or UDP transmission protocol. This implementation extends the features of these general-purpose libraries and incorporates solutions that might provide, with limited modifications, full compatibility with a well-established and more general communication protocol, i.e. XML-RPC, while preserving portability to the different platforms supported by LabVIEW. The XMLvRPC suite of software has been equipped with specific tools for its deployment in distributed control systems, for instance a quasi-automatic configuration and registration of the distributed components and a simple plug-and-play approach to the installation of new services. A key feature is the management of large binary arrays, which allows large binary data sets, e.g. raw images, to be coded more efficiently than with standard XML coding.

  2. An XML-based communication protocol for accelerator distributed controls

    Energy Technology Data Exchange (ETDEWEB)

    Catani, L. [INFN-Roma Tor Vergata, Rome (Italy)], E-mail: luciano.catani@roma2.infn.it

    2008-03-01

    This paper presents the development of XMLvRPC, an RPC-like communication protocol based, for this particular application, on the TCP/IP and XML (eXtensible Markup Language) tools built into LabVIEW. XML is used to format commands and data passed between client and server, while the socket interface for communication uses either the TCP or UDP transmission protocol. This implementation extends the features of these general-purpose libraries and incorporates solutions that might provide, with limited modifications, full compatibility with a well-established and more general communication protocol, i.e. XML-RPC, while preserving portability to the different platforms supported by LabVIEW. The XMLvRPC suite of software has been equipped with specific tools for its deployment in distributed control systems, for instance a quasi-automatic configuration and registration of the distributed components and a simple plug-and-play approach to the installation of new services. A key feature is the management of large binary arrays, which allows large binary data sets, e.g. raw images, to be coded more efficiently than with standard XML coding.
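    An RPC-style command carrying a binary array inside an XML envelope might look like the sketch below; the element names and the base64 encoding are assumptions for illustration, not XMLvRPC's actual wire format (the paper uses its own, more efficient coding for binary data):

```python
import base64
import xml.etree.ElementTree as ET

def encode_command(name: str, payload: bytes) -> str:
    """Wrap a command name and a binary array in an XML-RPC-like envelope."""
    msg = ET.Element("call")
    ET.SubElement(msg, "method").text = name
    # Base64 keeps the arbitrary binary block well-formed inside the document.
    ET.SubElement(msg, "binary").text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(msg, encoding="unicode")

def decode_command(xml_text: str):
    """Recover the command name and binary payload from the envelope."""
    msg = ET.fromstring(xml_text)
    return msg.findtext("method"), base64.b64decode(msg.findtext("binary"))

wire = encode_command("getImage", b"\x00\x01\xfe\xff")
method, data = decode_command(wire)
```

    Base64 inflates binary data by roughly a third, which is precisely why a protocol shipping raw images would want a more compact binary coding than standard XML text.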

  3. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  4. Web geoprocessing services on GML with a fast XML database

    African Journals Online (AJOL)


    tasks on those data and return response messages and/or data outputs. To achieve an efficient … Even though GML data is based on the XML data model and can be … language, and a stylesheet language, CSS (Cascading Style Sheets).

  5. Web-based infectious disease reporting using XML forms.

    Science.gov (United States)

    Liu, Danhong; Wang, Xia; Pan, Feng; Xu, Yongyong; Yang, Peng; Rao, Keqin

    2008-09-01

    Exploring solutions for infectious disease information sharing among hospital and public health information systems is imperative to the improvement of disease surveillance and emergent response. This paper aimed at developing a method to directly transmit real-time data of notifiable infectious diseases from hospital information systems to public health information systems on the Internet by using a standard eXtensible Markup Language (XML) format. The mechanism and work flow by which notifiable infectious disease data are created, reported and used at health agencies in China was evaluated. The capacity of all participating providers to use electronic data interchange to submit transactions of data required for the notifiable infectious disease reporting was assessed. The minimum data set at national level that is required for reporting for national notifiable infectious disease surveillance was determined. The standards and techniques available worldwide for electronic health data interchange, such as XML, HL7 messaging, CDA and ATSM CCR, etc. were reviewed and compared, and an XML implementation format needed for this purpose was defined for hospitals that are able to access the Internet to provide a complete infectious disease reporting. There are 18,703 county or city hospitals in China. All of them have access to basic information infrastructures including computers, e-mail and the Internet. Nearly 10,000 hospitals possess hospital information systems used for electronically recording, retrieving and manipulating patients' information. These systems collect 23 data items required in the minimum data set for national notifiable infectious disease reporting. In order to transmit these data items to the disease surveillance system and local health information systems instantly and without duplication of data input, an XML schema and a set of standard data elements were developed to define the content, structure and semantics of the data set. These standards
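    A minimum-data-set case record of the kind described could be serialized as a small XML document; the element names below are hypothetical illustrations, not the 23-item schema defined in the paper:

```python
import xml.etree.ElementTree as ET

def build_report(case: dict) -> str:
    """Serialize one notifiable-disease case as XML (hypothetical element
    names; the paper defines its own standard data elements)."""
    root = ET.Element("caseReport")
    for field in ("disease", "onsetDate", "reportingHospital"):
        ET.SubElement(root, field).text = case[field]
    return ET.tostring(root, encoding="unicode")

xml_report = build_report({
    "disease": "measles",
    "onsetDate": "2008-05-01",
    "reportingHospital": "County Hospital 12",
})
```

    An agreed schema of this sort lets a hospital information system emit the report once and have the surveillance system validate and ingest it without duplicate data entry.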

  6. XML-kieliperhe tietokannan hallintajärjestelmien näkökulmasta [The XML language family from the perspective of database management systems]

    OpenAIRE

    Imeläinen, Jani

    2006-01-01

    The thesis examines the specifications of the XML language family from the perspective of database management systems. The XML specifications are compared with the basic concepts of database management systems, and within this scope the most essential XML specifications are presented. The main goal is to determine the significance and role of the XML language family's specifications in the processing of XML documents in database management systems. The central result of the thesis is a framework illustrating the database man…

  7. Progress on an implementation of MIFlowCyt in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been and will be extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate the searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also result in textual and numeric data being published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with the combination of HTML and CSS.

  8. Methods and Technologies of XML Data Modeling for IP Mode Intelligent Measuring and Controlling System

    International Nuclear Information System (INIS)

    Liu, G X; Hong, X B; Liu, J G

    2006-01-01

    This paper presents the IP mode intelligent measuring and controlling system (IMIMCS). Based on object-oriented modeling technology with UML and XML Schema, innovative methods and technologies for some key problems of XML data modeling in the IMIMCS are discussed, including refinement of the system's business logic by means of UML use-case diagrams, confirmation of the content of the XML data model and the logical relationships of XML Schema objects with the aid of UML class diagrams, and the mapping rules from the UML object model to XML Schema. Finally, an application of the XML-based IMIMCS for a modern greenhouse is presented. The results show that the measuring and controlling data modeling methods in the IMIMCS, involving a multi-layer structure and multiple operating systems, possess strong reliability and flexibility, guarantee the uniformity of complex XML documents, and meet the requirement of cross-platform data communication.

  9. A Whiter Shade of Grey: A new approach to archaeological grey literature using the XML version of the TEI Guidelines

    Directory of Open Access Journals (Sweden)

    Gail Falkingham

    2005-04-01

    detail to which the reports' structure and content have been encoded has been influenced principally by a review of user needs identified by recent national surveys and the potential for export of data for the population of other heritage datasets. Through the application of CSS and XSL stylesheets, the case study demonstrates how the reports and their content may be displayed in different ways and how selected data may be extracted from the text for input into other systems, such as Historic Environment Records and the OASIS Project database. The author came to this project as a novice in the use of XML and XSLT, and learnt far more as the case study progressed. Whilst it has been possible to achieve the desired aims, it is acknowledged that this is just a starting point; more advanced users of XSLT will, no doubt, be able to produce more sophisticated ways of applying styling and transformation. Nevertheless, it is hoped that this exploration of the potential of archaeological document markup will encourage others to use and experiment with XML. The practical elements of this paper demonstrate how XML and XSLT have the power and flexibility to open up new possibilities for the presentation of grey literature on the Web, and for the repurposing of report content, above and beyond those achievable with the proprietary file formats favoured at present. There is national interest in, and call for, the development of new methods of electronic publication for archaeological reports; it is hoped that this article will contribute to this debate.

  10. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each laboratory's database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrading, addition, or changing of individual items without affecting the entire ground system. Using XML should also allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) products, which is often limiting

  11. XML schemas and mark-up practices of taxonomic literature.

    Science.gov (United States)

    Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat

    2011-01-01

    We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.

  12. QuakeML - An XML Schema for Seismology

    Science.gov (United States)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.
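A structured XML catalog of the kind QuakeML proposes can be consumed with any standard XML library. The element and attribute names in this sketch are simplified illustrations, not the actual QuakeML schema.

```python
# Sketch: reading a QuakeML-style event catalog with the standard library.
# The <eventCatalog>/<event> vocabulary is invented for illustration.
from xml.etree import ElementTree as ET

catalog_xml = """
<eventCatalog>
  <event id="ev001">
    <magnitude>4.7</magnitude>
    <region>Zurich</region>
  </event>
  <event id="ev002">
    <magnitude>2.1</magnitude>
    <region>Basel</region>
  </event>
</eventCatalog>
"""

root = ET.fromstring(catalog_xml)
# Pull each event into a typed tuple (id, magnitude, region).
events = [
    (e.get("id"), float(e.findtext("magnitude")), e.findtext("region"))
    for e in root.iter("event")
]
# Filter on a typed value, something a text-based format makes error-prone.
strong = [eid for eid, mag, _ in events if mag >= 4.0]
print(events, strong)
```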

  13. The role of XML in the CMS detector description

    International Nuclear Information System (INIS)

    Liendl, M.; Lingen, F.van; Todorov, T.; Arce, P.; Furtjes, A.; Innocente, V.; Roeck, A. de; Case, M.

    2001-01-01

    Offline Software such as Simulation, Reconstruction, Analysis, and Visualisation are all in need of a detector description. These applications have several common but also many specific requirements for the detector description in order to build up their internal representations. To achieve this in a consistent and coherent manner a common source of information, the detector description database, will be consulted by each of the applications. The role and suitability of XML in the design of the detector description database in the scope of the CMS detector at the LHC is discussed. Different aspects such as data modelling capabilities of XML, tool support, integration to C++ representations of data models are treated and recent results of prototype implementations are presented

  14. Fuzzy Approaches to Flexible Querying in XML Retrieval

    Directory of Open Access Journals (Sweden)

    Stefania Marrara

    2016-04-01

    In this paper we review some approaches to flexible querying in XML that apply several techniques, among which Fuzzy Set Theory. In particular we focus on FleXy, a flexible extension of XQuery-FT that was developed as a library on the open-source engine BaseX. We then present PatentLight, a tool for patent retrieval that was developed to show the expressive power of FleXy.
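The flavor of fuzzy, score-based XML retrieval can be sketched as follows. The membership function here (fraction of query terms present in an element) is an invented stand-in for FleXy's actual XQuery-FT-based semantics.

```python
# Sketch: fuzzy XML retrieval - each element gets a membership score in
# [0, 1] instead of a boolean match. The scoring function is illustrative.
from xml.etree import ElementTree as ET

doc = ET.fromstring("""
<articles>
  <article><title>Fuzzy XML querying</title></article>
  <article><title>Relational databases</title></article>
</articles>
""")

def fuzzy_score(elem, terms):
    """Membership degree: fraction of query terms found in the element text."""
    text = " ".join(elem.itertext()).lower()
    return sum(t.lower() in text for t in terms) / len(terms)

terms = ["fuzzy", "xml"]
ranked = sorted(
    ((fuzzy_score(a, terms), a.findtext("title")) for a in doc.iter("article")),
    reverse=True,
)
print(ranked)
```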

  15. XTCE: XML Telemetry and Command Exchange Tutorial, XTCE Version 1

    Science.gov (United States)

    Rice, Kevin; Kizzort, Brad

    2008-01-01

    These presentation slides are a tutorial on XML Telemetry and Command Exchange (XTCE). The goal of XTCE is to provide an industry-standard mechanism for describing telemetry and command streams (particularly from satellites). It will lower costs and increase validation over traditional formats, and support exchange of native formats. XTCE is designed to describe the bit streams that are typical of telemetry and command in the historic space domain.
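The core XTCE idea, an XML description driving the decoding of a telemetry bit stream, can be sketched like this. The XML vocabulary below is invented and far simpler than real XTCE.

```python
# Sketch: decoding a fixed telemetry frame from an XML field description,
# in the spirit of XTCE. The <telemetry>/<parameter> vocabulary is invented.
from xml.etree import ElementTree as ET

desc = ET.fromstring("""
<telemetry>
  <parameter name="mode" bits="4"/>
  <parameter name="voltage" bits="12"/>
</telemetry>
""")

def decode(frame_bytes, description):
    """Slice big-endian bit fields out of a frame per the XML description."""
    bits = int.from_bytes(frame_bytes, "big")
    total = len(frame_bytes) * 8
    values, offset = {}, 0
    for p in description.iter("parameter"):
        width = int(p.get("bits"))
        shift = total - offset - width           # position of this field
        values[p.get("name")] = (bits >> shift) & ((1 << width) - 1)
        offset += width
    return values

values = decode(bytes([0x2F, 0xA3]), desc)
print(values)
```

Changing the XML description re-shapes the decoder without touching the code, which is the point of a declarative format like XTCE.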

  16. New XML-Based Files: Implications for Forensics

    Science.gov (United States)

    2009-04-01

    previously unknown social networks.4 We can use unique identifiers that survived copying and pasting to show plagiarism. Unique identifiers can also raise...the ODF and OOX specifications to standards bodies, surprisingly few technical articles have published details about the new XML document file...Sharp, George Dinolt, Beth Rosenberg, and the anonymous reviewers for their comments on previous versions of this article. This work was funded in

  17. Toward a Normalized XML Schema for the GGP Data Archives

    Directory of Open Access Journals (Sweden)

    Alban Gabillon

    2013-04-01

    Since 1997, the Global Geodynamics Project (GGP) stations have used a text-based data format. The main drawback of this type of data coding is the lack of data integrity during the data flow processing. As a result, metadata and even data must be checked by human operators. In this paper, we propose a new format for representing the GGP data. This new format is based on the eXtensible Markup Language (XML).
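The conversion from a line-oriented text format to XML can be sketched as follows. The column layout and element names are invented for illustration and differ from the actual GGP format.

```python
# Sketch: turning line-oriented records (like a legacy text data format)
# into XML. The "date time gravity pressure" layout is illustrative only.
from xml.etree import ElementTree as ET

legacy_lines = [
    "20130401 000000  981.234  1012.5",
    "20130401 000100  981.231  1012.4",
]

root = ET.Element("ggpData")
for line in legacy_lines:
    date, time, gravity, pressure = line.split()
    sample = ET.SubElement(root, "sample", date=date, time=time)
    ET.SubElement(sample, "gravity").text = gravity
    ET.SubElement(sample, "pressure").text = pressure

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Once in XML, the records can be validated against a schema, which addresses the data-integrity problem the abstract describes.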

  18. XSemantic: An Extension of LCA Based XML Semantic Search

    Science.gov (United States)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is keyword search, because it does not require any knowledge of the XML structure or learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the search in terms of semantics. Firstly, with semantic term expansion, our system is robust against ambiguous keywords by using a domain ontology. Secondly, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Thirdly, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that search results with higher relevance are presented to the users first. On the other hand, following the LCA and proximity search approaches, we investigated the problem of the information included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define our simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
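The LCA idea underlying XSemantic's LCEA can be sketched as follows: find the deepest element whose subtree contains every keyword. This simplified version ignores term expansion, ontologies, and ranking.

```python
# Sketch: lowest common ancestor of keyword matches in an XML tree, the
# core idea behind LCA-style XML keyword search. Sample data is invented.
from xml.etree import ElementTree as ET

doc = ET.fromstring("""
<library>
  <book>
    <title>XML Retrieval</title>
    <author>Blanken</author>
  </book>
  <book>
    <title>Databases</title>
    <author>Gray</author>
  </book>
</library>
""")

def keyword_lca(root, keywords):
    """Return the deepest element whose subtree contains every keyword."""
    def hits(elem):
        text = " ".join(elem.itertext()).lower()
        return all(k.lower() in text for k in keywords)
    node = root if hits(root) else None
    while node is not None:
        # Descend while a single child still covers all keywords.
        child = next((c for c in node if hits(c)), None)
        if child is None:
            return node
        node = child
    return None

lca = keyword_lca(doc, ["XML", "Blanken"])
print(lca.tag)
```

Here the two keywords match different children of the first `book`, so that `book` element, not the whole `library`, is the natural return node.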

  19. An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results.

    Science.gov (United States)

    Stanislaus, Romesh; Jiang, Liu Hong; Swartz, Martha; Arthur, John; Almeida, Jonas S

    2004-01-29

    Many proteomics initiatives require a seamless bioinformatics integration of a range of analytical steps between sample collection and systems modeling, immediately assessable to the participants involved in the process. From proteomics profiling by 2D gel electrophoresis to the putative identification of differentially expressed proteins by comparison of mass spectrometry results with reference databases, many components of sample processing, not just analysis and interpretation, are regularly revisited and updated. For such updates and dissemination of data, a suitable data structure is needed. However, no data structures are currently available for storing the data for multiple gels generated through a single proteomics experiment in a single XML file. This paper proposes a data structure based on XML standards to fill the void that exists between the data generated by proteomics experiments and the storage of those data. In order to address the resulting procedural fluidity we have adopted and implemented a data model centered on the concept of the annotated gel (AG) as the format for delivery and management of 2D gel electrophoresis results. An eXtensible Markup Language (XML) schema is proposed to manage, analyze and disseminate annotated 2D gel electrophoresis results. The structure of AG objects is formally represented using XML, resulting in the definition of the AGML syntax presented here. The proposed schema accommodates data on the electrophoresis results as well as the mass-spectrometry analysis of selected gel spots. A web-based software library is being developed to handle data storage, analysis and graphic representation. The computational tools described will be made available at http://bioinformatics.musc.edu/agml. Our development of AGML provides a simple data structure for storing 2D gel electrophoresis data.
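An annotated-gel document in this spirit can be sketched as gel spots with attached mass-spectrometry identifications. The element names below are illustrative, not the published AGML syntax.

```python
# Sketch: an annotated-gel style document - spots plus MS identifications -
# built and queried with the standard library. Element names are invented.
from xml.etree import ElementTree as ET

gel = ET.Element("annotatedGel", id="gel-01")
for spot_id, protein in [("s1", "albumin"), ("s2", "transferrin")]:
    spot = ET.SubElement(gel, "spot", id=spot_id)
    ET.SubElement(spot, "msIdentification").text = protein

# Because both kinds of result live in one tree, a single traversal
# collects the MS identifications for every spot on the gel.
proteins = [s.findtext("msIdentification") for s in gel.iter("spot")]
print(proteins)
```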

  20. An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results

    Directory of Open Access Journals (Sweden)

    Arthur John

    2004-01-01

    Abstract Background Many proteomics initiatives require a seamless bioinformatics integration of a range of analytical steps between sample collection and systems modeling, immediately assessable to the participants involved in the process. From proteomics profiling by 2D gel electrophoresis to the putative identification of differentially expressed proteins by comparison of mass spectrometry results with reference databases, many components of sample processing, not just analysis and interpretation, are regularly revisited and updated. For such updates and dissemination of data, a suitable data structure is needed. However, no data structures are currently available for storing the data for multiple gels generated through a single proteomics experiment in a single XML file. This paper proposes a data structure based on XML standards to fill the void that exists between the data generated by proteomics experiments and the storage of those data. Results In order to address the resulting procedural fluidity we have adopted and implemented a data model centered on the concept of the annotated gel (AG) as the format for delivery and management of 2D gel electrophoresis results. An eXtensible Markup Language (XML) schema is proposed to manage, analyze and disseminate annotated 2D gel electrophoresis results. The structure of AG objects is formally represented using XML, resulting in the definition of the AGML syntax presented here. Conclusion The proposed schema accommodates data on the electrophoresis results as well as the mass-spectrometry analysis of selected gel spots. A web-based software library is being developed to handle data storage, analysis and graphic representation. The computational tools described will be made available at http://bioinformatics.musc.edu/agml. Our development of AGML provides a simple data structure for storing 2D gel electrophoresis data.

  1. Rock.XML - Towards a library of rock physics models

    Science.gov (United States)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications for rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language (XML), making it flexible enough to handle complex models as well as scalable towards extension with new theories and models. This technology has great advantages for documenting and exchanging models unambiguously between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.
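A Rock.XML-like model description can be sketched as a set of constituents plus a mixing rule. The element names and the simple Voigt (volume-weighted) average below are illustrative assumptions, not the actual Rock.XML format.

```python
# Sketch: a rock-model description in XML - constituents with volume
# fractions and moduli, combined by a mixing rule named in the document.
# Element names and the Voigt average are illustrative, not Rock.XML itself.
from xml.etree import ElementTree as ET

model = ET.fromstring("""
<rockModel mixing="voigt">
  <constituent name="quartz" fraction="0.8" bulkModulus="37.0"/>
  <constituent name="clay" fraction="0.2" bulkModulus="21.0"/>
</rockModel>
""")

def effective_bulk_modulus(model):
    """Volume-weighted (Voigt) average of the constituents' moduli (GPa)."""
    return sum(
        float(c.get("fraction")) * float(c.get("bulkModulus"))
        for c in model.iter("constituent")
    )

k_eff = effective_bulk_modulus(model)
print(k_eff)
```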

  2. The Format Converting/Transfer Agent and Repository System based on ebXML

    Directory of Open Access Journals (Sweden)

    KyeongRim Ahn

    2004-12-01

    With the introduction of XML into the e-commerce environment, various document formats have come into use owing to XML's characteristics, and document formats other than XML are also used to exchange EC-related information. That is, as the number of trading partners increases, the number of exchanged document formats increases and business processing becomes more complex, so management difficulties and duplication problems arise. Organizations therefore want to consolidate their plural business workflows into a general and uniform form by defining and arranging the Business Process (BP). In this paper, we accordingly adopt XML as the future document standard and discuss the service system architecture and Repository. The Repository stores and manages document standards, information related to business processing, messaging profiles, and so on, and its structure is designed to cover various XML standards. We also design the system to support the ebXML communication protocol MSH as well as traditional communication protocols such as X.25 and X.400, and implement the exchange of information via FTP.

  3. Lessons in scientific data interoperability: XML and the eMinerals project.

    Science.gov (United States)

    White, T O H; Bruin, R P; Chiang, G-T; Dove, M T; Tyer, R P; Walker, A M

    2009-03-13

    A collaborative environmental eScience project produces a broad range of data, notable as much for its diversity, in source and format, as for its quantity. We find that the eXtensible Markup Language (XML) and associated technologies are invaluable in managing this deluge of data. We describe FoX, a toolkit that allows Fortran codes to read and write XML, thus allowing existing scientific tools to be easily re-used in an XML-centric workflow.

  4. Single event monitoring system based on Java 3D and XML data binding

    International Nuclear Information System (INIS)

    Wang Liang; Chinese Academy of Sciences, Beijing; Zhu Kejun; Zhao Jingwei

    2007-01-01

    Online single event monitoring is important to the BESIII DAQ system. Java 3D is an extension of the Java language for 3D technology, and XML data binding is more efficient for handling XML documents than SAX or DOM. This paper mainly introduces the implementation of the BESIII single event monitoring system with Java 3D and XML data binding, and the interface to the track-fitting software via JNI technology. (authors)
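The data-binding idea, mapping XML records directly onto typed objects instead of walking a DOM tree, can be sketched as follows. The event fields are invented, not BESIII's actual event structure.

```python
# Sketch: XML data binding - an XML event record is mapped straight onto
# typed objects, so downstream code never touches the DOM. Fields invented.
from dataclasses import dataclass
from xml.etree import ElementTree as ET

@dataclass
class TrackHit:
    detector: str
    x: float
    y: float

event_xml = """
<event>
  <hit><detector>MDC</detector><x>1.5</x><y>-0.3</y></hit>
  <hit><detector>TOF</detector><x>2.0</x><y>0.8</y></hit>
</event>
"""

def bind_hits(xml_text):
    """Bind every <hit> element to a TrackHit instance."""
    root = ET.fromstring(xml_text)
    return [
        TrackHit(h.findtext("detector"),
                 float(h.findtext("x")),
                 float(h.findtext("y")))
        for h in root.iter("hit")
    ]

hits = bind_hits(event_xml)
print(hits)
```

After binding, the monitoring code works with plain typed objects, which is what makes this style more convenient than SAX callbacks or DOM navigation.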

  5. Representing nested semantic information in a linear string of text using XML.

    OpenAIRE

    Krauthammer, Michael; Johnson, Stephen B.; Hripcsak, George; Campbell, David A.; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a N...

  6. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    Science.gov (United States)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has its limitations in presenting data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD, and is consequently translated by the XSD-Translator. The output of the XSD-Translator is an XSD, which is our target and is called the object language.

  7. Model tool to describe chemical structures in XML format utilizing structural fragments and chemical ontology.

    Science.gov (United States)

    Sankar, Punnaivanam; Alain, Krief; Aghila, Gnanasekaran

    2010-05-24

    We have developed a model structure-editing tool, ChemEd, programmed in JAVA, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in a fragment ontology to provide conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics, explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.

  8. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    In recent years the most popular subject in the Information Systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information system environment. The incorporation, acquisition, and merger of corporations are the major reasons for the popularity of Enterprise Application Integration. The main purpose is to solve the application integration problems while the similar systems in such corporations continue working together for some time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within a corporation or between corporations.

  9. Content Management von Leittexten mit XML Topic Maps

    Directory of Open Access Journals (Sweden)

    Johannes Busse

    2003-07-01

    The authors define the use of internet-based information and communication technologies as a key qualification for students of all disciplines. In this paper they describe a project that the Department of Educational Science at the University of Heidelberg has been running since 2001. In it, students of the humanities and social sciences are trained as "learning advisors" who, as multipliers, acquire the necessary knowledge. Following the "Leittext" (guided text) method, the participants develop XML-based content in a self-directed way. This presupposes the acquisition of information-technology skills, which, alongside the building of a (both technical and social) network, forms one focus of the training.

  10. An XML-based configuration system for MAST PCS

    International Nuclear Information System (INIS)

    Storrs, J.; McArdle, G.

    2008-01-01

    MAST PCS, a port of General Atomics' generic Plasma Control System, is a large software system comprising many source files in C and IDL. Application parameters can affect multiple source files in complex ways, making code development and maintenance difficult. The MAST PCS configuration system aims to make the task of the application developer easier, through the use of XML-based configuration files and a configuration tool which processes them. It is presented here as an example of a useful technique with wide application

  11. Research on Heterogeneous Data Exchange based on XML

    Science.gov (United States)

    Li, Huanqin; Liu, Jinfeng

    Integration of multiple data sources is becoming increasingly important for enterprises that cooperate closely with their partners for e-commerce. OLAP enables analysts and decision makers fast access to various materialized views from data warehouses. However, many corporations have internal business applications deployed on different platforms. This paper introduces a model for heterogeneous data exchange based on XML. The system can exchange and share the data among the different sources. The method used to realize the heterogeneous data exchange is given in this paper.
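The use of XML as a neutral interchange format between heterogeneous sources can be sketched as a lossless round trip: rows from one system are serialized to XML and rebuilt as records on the other side. The field names are illustrative.

```python
# Sketch: XML as a neutral interchange format between heterogeneous
# systems - serialize rows to XML on one side, rebuild them on the other.
from xml.etree import ElementTree as ET

def rows_to_xml(rows):
    """Serialize a list of flat string-valued records to an XML payload."""
    root = ET.Element("exchange")
    for row in rows:
        rec = ET.SubElement(root, "record")
        for field, value in row.items():
            ET.SubElement(rec, field).text = value
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml_text):
    """Rebuild the records from the XML payload on the receiving side."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in rec} for rec in root]

source_rows = [{"sku": "A-1", "qty": "3"}, {"sku": "B-7", "qty": "12"}]
payload = rows_to_xml(source_rows)
round_tripped = xml_to_rows(payload)
print(round_tripped == source_rows)
```

Neither side needs to know the other's platform or storage model; only the XML payload is shared.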

  12. Automatically Generating a Distributed 3D Battlespace Using USMTF and XML-MTF Air Tasking Order, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  13. Automatically Generating a Distributed 3D Virtual Battlespace Using USMTF and XML-MTF Air Tasking Orders, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  14. The Simplest Evaluation Measures for XML Information Retrieval that Could Possibly Work

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Mihajlovic, V.

    2005-01-01

    This paper reviews several evaluation measures developed for evaluating XML information retrieval (IR) systems. We argue that these measures, some of which are currently in use by the INitiative for the Evaluation of XML Retrieval (INEX), are complicated, hard to understand, and hard to explain to

  15. XML schema matching: balancing efficiency and effectiveness by means of clustering

    NARCIS (Netherlands)

    Smiljanic, M.

    2006-01-01

    In this thesis we place our research in the scope of a tool which looks for information within XML data on the Internet. We envision a personal schema querying system which enables a user to express his information need by specifying a personal XML schema. The user can also ask queries over his

  16. 78 FR 28732 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Science.gov (United States)

    2013-05-16

    ...] Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema AGENCY: Federal... the SUPPLEMENTARY INFORMATION Section below for details. DATES: The XML is now available at the links mentioned below. FOR FURTHER INFORMATION CONTACT: Christina Switzer, Office of the General Counsel, Federal...

  17. Evaluating XML-Extended OLAP Queries Based on a Physical Algebra

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2006-01-01

    . In this paper, we extend previous work on the logical federation of OLAP and XML data sources by presenting a simplified query semantics, a physical query algebra and a robust OLAP-XML query engine as well as the query evaluation techniques. Performance experiments with a prototypical implementation suggest...

  18. Managing XML Data to optimize Performance into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-06-01

    This paper proposes some possibilities for managing XML data in order to optimize performance in object-relational databases. It details the possibility of storing XML data in such databases, using an Oracle database for exemplification, and tests some optimization techniques for queries over XMLType tables, such as indexing and partitioning.

  19. Efficient XML Interchange (EXI) Compression and Performance Benefits: Development, Implementation and Evaluation

    Science.gov (United States)

    2010-03-01

    a. Information Grammar Theory (Chomsky) Both grammars and events are learned for each XML document by means of a supporting schema or by processing the XML document. The learning process is similar to Chomsky grammars, a hierarchical-based formal grammar for defining a language

  20. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    Science.gov (United States)

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  1. Generic and updatable XML value indices covering equality and range lookups

    NARCIS (Netherlands)

    E. Sidirourgos (Eleftherios); P.A. Boncz (Peter)

    2008-01-01

    We describe a collection of indices for XML text, element, and attribute node values that (i) consume little storage, (ii) have low maintenance overhead, (iii) permit fast equi-lookup on string values, and (iv) support range-lookup on any XML typed value (e.g., double, dateTime). The
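A toy version of such a value index, supporting both equality and range lookup over typed attribute values, can be sketched with a sorted list; the papers' actual storage layout is far more compact and updatable.

```python
# Sketch: an equality- and range-lookup index over typed XML attribute
# values, using a sorted list of (typed value, node id) pairs. Toy version.
import bisect
from xml.etree import ElementTree as ET

doc = ET.fromstring("""
<log>
  <entry temp="12.5"/>
  <entry temp="3.1"/>
  <entry temp="27.0"/>
  <entry temp="12.5"/>
</log>
""")

# Build the index: parse each attribute to its typed value, keep node ids.
index = sorted(
    (float(e.get("temp")), node_id)
    for node_id, e in enumerate(doc.iter("entry"))
)
keys = [v for v, _ in index]

def equi_lookup(value):
    """All node ids whose typed value equals `value`."""
    lo, hi = bisect.bisect_left(keys, value), bisect.bisect_right(keys, value)
    return [node_id for _, node_id in index[lo:hi]]

def range_lookup(low, high):
    """All node ids whose typed value lies in [low, high]."""
    lo, hi = bisect.bisect_left(keys, low), bisect.bisect_right(keys, high)
    return [node_id for _, node_id in index[lo:hi]]

print(equi_lookup(12.5), range_lookup(3.0, 13.0))
```

Because comparisons are done on the parsed (typed) values rather than on strings, "3.1" correctly sorts below "12.5", which is exactly what typed value indices buy over plain text indices.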

  2. Generic and Updatable XML Value Indices Covering Equality and Range Lookups

    NARCIS (Netherlands)

    E. Sidirourgos (Eleftherios); P.A. Boncz (Peter)

    2009-01-01

    We describe a collection of indices for XML text, element, and attribute node values that (i) consume little storage, (ii) have low maintenance overhead, (iii) permit fast equi-lookup on string values, and (iv) support range-lookup on any XML typed value (e.g., double, dateTime). The

  3. Evaluating XML-Extended OLAP Queries Based on a Physical Algebra

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    2004-01-01

    is desirable. In this paper, we extend previous work on the logical federation of OLAP and XML data sources by presenting a simplified query semantics, a physical query algebra and a robust OLAP-XML query engine. Performance experiments with a prototypical implementation suggest that the performance for OLAP

  4. XPIWIT--an XML pipeline wrapper for the Insight Toolkit.

    Science.gov (United States)

    Bartschat, Andreas; Hübner, Eduard; Reischl, Markus; Mikut, Ralf; Stegmaier, Johannes

    2016-01-15

The Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or from slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos. XPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under the Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/. Contact: johannes.stegmaier@kit.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
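The idea of an XML-described, dynamically assembled processing pipeline can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: the element names, the `STEPS` registry, and the list-based "image" data are invented and are not XPIWIT's actual format or API.

```python
import xml.etree.ElementTree as ET

# Hypothetical pipeline description; element and attribute names are
# illustrative only, not XPIWIT's real schema.
PIPELINE = """
<pipeline>
  <filter name="threshold" value="10"/>
  <filter name="scale" value="2"/>
</pipeline>
"""

# Each XML <filter> element selects one processing step by name.
STEPS = {
    "threshold": lambda data, v: [x for x in data if x >= v],
    "scale": lambda data, v: [x * v for x in data],
}

def run_pipeline(xml_text, data):
    """Apply the filters in document order to the input data."""
    for f in ET.fromstring(xml_text).findall("filter"):
        data = STEPS[f.get("name")](data, float(f.get("value")))
    return data
```

The point of the pattern is that changing the analysis chain means editing XML, not recompiling code.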

  5. XML-Based Visual Specification of Multidisciplinary Applications

    Science.gov (United States)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  6. Personalising e-learning modules: targeting Rasmussen levels using XML.

    Science.gov (United States)

    Renard, J M; Leroy, S; Camus, H; Picavet, M; Beuscart, R

    2003-01-01

    The development of Internet technologies has made it possible to increase the number and the diversity of on-line resources for teachers and students. Initiatives like the French-speaking Virtual Medical University Project (UMVF) try to organise the access to these resources. But both teachers and students are working on a partly redundant subset of knowledge. From the analysis of some French courses we propose a model for knowledge organisation derived from Rasmussen's stepladder. In the context of decision-making Rasmussen has identified skill-based, rule-based and knowledge-based levels for the mental process. In the medical context of problem-solving, we apply these three levels to the definition of three students levels: beginners, intermediate-level learners, experts. Based on our model, we build a representation of the hierarchical structure of data using XML language. We use XSLT Transformation Language in order to filter relevant data according to student level and to propose an appropriate display on students' terminal. The model and the XML implementation we define help to design tools for building personalised e-learning modules.
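The level-based filtering step described above can be sketched in Python. The paper itself uses XSLT; the element names, the `level` attribute, and the three-level ranking below are assumptions for illustration, not the UMVF schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical course fragment; structure is invented for illustration.
COURSE = """
<course>
  <item level="beginner">Basic anatomy</item>
  <item level="intermediate">Differential diagnosis</item>
  <item level="expert">Rare presentations</item>
</course>
"""

# Rank the three Rasmussen-derived student levels so a student sees
# their own level and everything below it.
LEVELS = {"beginner": 0, "intermediate": 1, "expert": 2}

def filter_for_level(xml_text, student_level):
    """Keep only items at or below the student's level."""
    root = ET.fromstring(xml_text)
    cutoff = LEVELS[student_level]
    return [item.text for item in root.findall("item")
            if LEVELS[item.get("level")] <= cutoff]
```

An XSLT stylesheet parameterised by student level would play the same role in the authors' setup.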

  7. XML as a standard I/O data format in scientific software development

    International Nuclear Information System (INIS)

    Song Tianming; Yang Jiamin; Yi Rongqing

    2010-01-01

    XML is an open standard data format with strict syntax rules, which is widely used in large-scale software development. It is adopted as I/O file format in the development of SpectroSim, a simulation and data-processing system for soft x-ray spectrometer used in ICF experiments. XML data that describe spectrometer configurations, schema codes that define syntax rules for XML and report generation technique for visualization of XML data are introduced. The characteristics of XML such as the capability to express structured information, self-descriptive feature, automation of visualization are explained with examples, and its feasibility as a standard scientific I/O data file format is discussed. (authors)

  8. Using a Combination of UML, C2RM, XML, and Metadata Registries to Support Long-Term Development/Engineering

    Science.gov (United States)

    2003-01-01

Key XML specifications covered include: authentication (XCBF); authorisation (XACML, SAML); privacy (P3P); digital rights management (XrML); content management (DASL, WebDAV); content syndication; the ebXML Registry/Repository; BPSS; e-commerce XML/EDI; the Universal Business Language (UBL); and Human Resources (HR-XML).

  9. XML Namespace與RDF的基本概念 | The Basic Concepts of XML Namespace and RDF

    Directory of Open Access Journals (Sweden)

    陳嵩榮 Sung-Jung Chen

    1999-04-01

    Full Text Available

Pages: 88-100

The XML Namespaces mechanism allows element and attribute names in an XML document to be qualified by a URI, providing a naming scheme that is unique on the Web and thereby resolving the name conflicts that can arise between elements and attributes of different XML documents. RDF provides an infrastructure for the many applications of metadata on the Web, enabling applications to exchange metadata and thus promoting the automated processing of Web resources. This article introduces the data models and syntax of XML Namespaces and RDF through a series of examples.

    XML namespaces provide a simple method for qualifying element and attribute names used in XML documents by associating them with namespaces identified by URI references. RDF is a foundation for processing metadata. It provides interoperability between
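The conflict-avoidance mechanism described above can be shown with a minimal Python example. The URIs and element names are invented; Python's `xml.etree.ElementTree` addresses namespace-qualified names in `{uri}local-name` (Clark) notation.

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define a "title" element; namespaces (with
# illustrative URIs) keep the names from clashing in one document.
DOC = """
<record xmlns:bib="http://example.org/bib"
        xmlns:web="http://example.org/web">
  <bib:title>A Book</bib:title>
  <web:title>A Home Page</web:title>
</record>
"""

root = ET.fromstring(DOC)
# Prefixes are expanded to their URIs, so the two titles stay distinct.
bib_title = root.find("{http://example.org/bib}title").text
web_title = root.find("{http://example.org/web}title").text
```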

  10. Domain XML semantic integration based on extraction rules and ontology mapping

    Directory of Open Access Journals (Sweden)

    Huayu LI

    2016-08-01

Full Text Available Many XML documents exist in the petroleum engineering field, but traditional XML integration solutions cannot provide semantic query, which leads to low data-use efficiency. In light of the WeXML (oil & gas well XML) data semantic integration and query requirements, this paper proposes a semantic integration method based on extraction rules and ontology mapping. The method first defines a series of extraction rules with which elements and properties of the WeXML Schema are mapped to classes and properties in the WeOWL ontology, respectively; second, an algorithm is used to transform WeXML documents into WeOWL instances. Because WeOWL provides limited semantics, ontology mappings between the two ontologies are then built to explain the classes and properties of the global ontology in terms of WeOWL, and semantic query based on a global domain concept model is provided. The proposed transformation rules, transfer algorithm and mapping rules are tested by constructing a WeXML data semantic integration prototype system.
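The extraction-rule step can be sketched in Python as a simplified stand-in: the element names, the `Well` class name, and the dictionary-based "instances" below are assumptions for illustration, not the WeOWL ontology itself.

```python
import xml.etree.ElementTree as ET

# Toy well document; element and attribute names are invented.
WELL = '<well id="W-1"><depth>2500</depth></well>'

def extract_instances(xml_text, element_to_class):
    """Apply extraction rules: each mapped element becomes an ontology
    instance, with child elements and XML attributes as properties."""
    root = ET.fromstring(xml_text)
    instances = []
    for elem in root.iter():
        if elem.tag in element_to_class:
            props = {child.tag: child.text for child in elem}
            props.update(elem.attrib)
            instances.append({"class": element_to_class[elem.tag],
                              "properties": props})
    return instances

# Rule: <well> elements map to a (hypothetical) ontology class "Well".
instances = extract_instances(WELL, {"well": "Well"})
```

A real implementation would emit OWL individuals rather than dictionaries, but the element-to-class mapping idea is the same.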

  11. Converting biomolecular modelling data based on an XML representation.

    Science.gov (United States)

    Sun, Yudong; McKeever, Steve

    2008-08-25

Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of the biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models requires converting molecular data between the different data representations of the models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  12. Version control of pathway models using XML patches.

    Science.gov (United States)

    Saffrey, Peter; Orton, Richard

    2009-03-17

Computational modelling has become an important tool in understanding biological systems such as signalling pathways. With an increase in the size and complexity of models comes a need for techniques to manage model versions and their relationships to one another. Model version control for pathway models shares some of the features of software version control but has a number of differences that warrant a specific solution. We present a model version control method, along with a prototype implementation, based on XML patches. We show its application to the EGF/RAS/RAF pathway. Our method allows quick and convenient storage of a wide range of model variations and enables a thorough explanation of these variations. Trying to produce these results without such methods leads to slow and cumbersome development that is prone to frustration and human error.
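The idea of storing model variations as patches can be sketched in Python. This is a heavily simplified stand-in for the authors' XML patch format: the `<param>` model shape and the dictionary patch are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Two versions of a toy pathway model; tag and attribute names are
# illustrative, not an actual pathway model format.
V1 = '<model><param id="k1" value="0.1"/><param id="k2" value="2.0"/></model>'
V2 = '<model><param id="k1" value="0.3"/><param id="k2" value="2.0"/></model>'

def diff_params(a_xml, b_xml):
    """Record changed parameter values as a minimal 'patch' mapping."""
    a = {p.get("id"): p.get("value") for p in ET.fromstring(a_xml)}
    b = {p.get("id"): p.get("value") for p in ET.fromstring(b_xml)}
    return {k: b[k] for k in b if a.get(k) != b[k]}

def apply_patch(a_xml, patch):
    """Apply a patch to a base version to reproduce the later version."""
    root = ET.fromstring(a_xml)
    for p in root:
        if p.get("id") in patch:
            p.set("value", patch[p.get("id")])
    return ET.tostring(root, encoding="unicode")

patch = diff_params(V1, V2)
```

Storing only `patch` for each variant, instead of a full copy of the model, is what makes keeping many variations cheap.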

  13. Simulation framework and XML detector description for the CMS experiment

    CERN Document Server

    Arce, P; Boccali, T; Case, M; de Roeck, A; Lara, V; Liendl, M; Nikitenko, A N; Schröder, M; Strässner, A; Wellisch, H P; Wenzel, H

    2003-01-01

    Currently CMS event simulation is based on GEANT3 while the detector description is built from different sources for simulation and reconstruction. A new simulation framework based on GEANT4 is under development. A full description of the detector is available, and the tuning of the GEANT4 performance and the checking of the ability of the physics processes to describe the detector response is ongoing. Its integration on the CMS mass production system and GRID is also currently under development. The Detector Description Database project aims at providing a common source of information for Simulation, Reconstruction, Analysis, and Visualisation, while allowing for different representations as well as specific information for each application. A functional prototype, based on XML, is already released. Also examples of the integration of DDD in the GEANT4 simulation and in the reconstruction applications are provided.

  14. Teaching object concepts for XML-based representations.

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, R. L. (Robert L.)

    2002-01-01

    Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.

  15. Managing and Querying Image Annotation and Markup in XML

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167

  16. Managing and Querying Image Annotation and Markup in XML.

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.

  17. Using XML and Java Technologies for Astronomical Instrument Control

    Science.gov (United States)

    Ames, Troy; Case, Lynne; Powers, Edward I. (Technical Monitor)

    2001-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center, under the Instrument Remote Control (IRC) project, is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human readable and machine understandable way to describe structured data. A key aspect of the object-oriented architecture is that the software is driven by an instrument description, written using the Instrument Markup Language (IML), a dialect of XML. IML is used to describe the command sets and command formats of the instrument, communication mechanisms, format of the data coming from the instrument, and characteristics of the graphical user interface to control and monitor the instrument. The IRC framework allows the users to define a data analysis pipeline which converts data coming out of the instrument. The data can be used in visualizations in order for the user to assess the data in real-time, if necessary. The data analysis pipeline algorithms can be supplied by the user in a variety of forms or programming languages. Although the current integration effort is targeted for the High-resolution Airborne Wideband Camera (HAWC) and the Submillimeter and Far Infrared Experiment (SAFIRE), first-light instruments of the Stratospheric Observatory for Infrared Astronomy (SOFIA), the framework is designed to be generic and extensible so that it can be applied to any instrument. Plans are underway to test the framework

  18. XML as a cross-platform representation for medical imaging with fuzzy algorithms.

    Science.gov (United States)

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2011-01-01

Machines that perform linguistic medical image interpretation are based on fuzzy algorithms. There are several frameworks that can edit and simulate fuzzy algorithms, but they are not compatible with most of the implemented applications. This paper suggests a representation for fuzzy algorithms in XML files, using this XML as a cross-platform bridge between the simulation framework and the software applications. The paper presents a parsing algorithm that converts files created by the simulation framework dynamically into an XML file, keeping the original logical structure of the files.
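One possible XML shape for a single fuzzy rule, sketched in Python. The `<rule>`/`<if>`/`<then>` structure is invented for illustration and is not the schema proposed in the paper.

```python
import xml.etree.ElementTree as ET

def rule_to_xml(antecedents, consequent):
    """Serialise a fuzzy rule 'IF var IS term ... THEN var IS term'
    into an illustrative XML form."""
    rule = ET.Element("rule")
    for var, term in antecedents:
        cond = ET.SubElement(rule, "if")
        cond.set("variable", var)
        cond.set("is", term)
    then = ET.SubElement(rule, "then")
    then.set("variable", consequent[0])
    then.set("is", consequent[1])
    return ET.tostring(rule, encoding="unicode")

xml_rule = rule_to_xml([("intensity", "high")], ("tissue", "lesion"))
```

Keeping the rule's logical structure explicit in the markup is what lets a simulator and a deployed application exchange the same algorithm.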

  19. Data Hiding and Security for XML Database: A TRBAC- Based Approach

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wan-song; SUN Wei; LIU Da-xin

    2005-01-01

In order to cope with varying protection granularity levels of XML (eXtensible Markup Language) documents, we propose a TXAC (Two-level XML Access Control) framework, in which an extended TRBAC (Temporal Role-Based Access Control) approach is proposed to deal with dynamic XML data. With its different system components, the TXAC algorithm evaluates access requests efficiently by applying the appropriate access control policy in a dynamic web environment. The method is a flexible and powerful security system offering a multi-level access control solution.

  20. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...

  1. Secure combination of XML signature application with message aggregation in multicast settings

    DEFF Research Database (Denmark)

    Becker, Andreas; Jensen, Meiko

    2013-01-01

The similarity-based aggregation of XML documents is a proven method for reducing network traffic. However, when used in conjunction with XML security standards, a lot of pitfalls, but also optimization potentials, exist. In this paper, we investigate these issues, showing how to exploit similarity-based aggregation for rapid distribution of digitally signed XML data. Using our own implementation in two different experimental settings, we provide both a thorough evaluation and a security proof for our approach. By this we prove both feasibility and security, and we illustrate how to achieve a network traffic...

  2. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html. Contact: kikuchi@hydra.mki.co.jp.

  3. The realization of the storage of XML and middleware-based data of electronic medical records

    International Nuclear Information System (INIS)

    Liu Shuzhen; Gu Peidi; Luo Yanlin

    2007-01-01

    In this paper, using the technology of XML and middleware to design and implement a unified electronic medical records storage archive management system and giving a common storage management model. Using XML to describe the structure of electronic medical records, transform the medical data from traditional 'business-centered' medical information into a unified 'patient-centered' XML document and using middleware technology to shield the types of the databases at different departments of the hospital and to complete the information integration of the medical data which scattered in different databases, conducive to information sharing between different hospitals. (authors)

  4. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, biomedical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data that exploit XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented in XML. XML has succeeded in the integration of heterogeneous biomolecular information and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making the effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve seamless integration of the current biological resources.

  5. Using XML technology for the ontology-based semantic integration of life science databases.

    Science.gov (United States)

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

Several hundred internet-accessible life science databases with constantly growing contents and varying areas of specialization are publicly available. Database integration, consequently, is a fundamental prerequisite for answering complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large-scale database integration at present takes considerable effort. As there is growing acceptance of the extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture, based on a native XML database and an expert system shell, is described for the realization of a real-world integration scenario.

  6. Phase II-SOF Knowledge Coupler-Based Phase I XML Schema

    National Research Council Canada - National Science Library

    Whitlock, Warren L

    2005-01-01

    ... a list of diagnostic choices in an XML-tagged database. An analysis of the search function indicates that the native search capability of the SOFMH does not inherently contain the requirements to sustain a diagnostic tool...

  7. A Survey and Analysis of Access Control Architectures for XML Data

    National Research Council Canada - National Science Library

    Estlund, Mark J

    2006-01-01

    .... Business uses XML to leverage the full potential of the Internet for e-Commerce. The government wants to leverage the ability to share information across many platforms between divergent agencies...

  8. Representing nested semantic information in a linear string of text using XML.

    Science.gov (United States)

    Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.

  9. XML as a format of expression of Object-Oriented Petri Nets

    Directory of Open Access Journals (Sweden)

    Petr Jedlička

    2004-01-01

Full Text Available A number of object-oriented (OO) variants have so far been devised for Petri Nets (PN). However, none of these variants has ever been described using an open, independent format – such as XML. This article suggests several possibilities and advantages of such a description. The outlined XML language definition for the description of object-oriented Petri Nets (OOPN) is based on XMI (a description of UML object-oriented models), SOX (a simple description of general OO systems) and PNML (an XML-based language used for the description of structured and modular PN). For OOPN, the XML form of description represents a standard format for storage as well as for transfer between various OOPN-processing (analysis, simulation, ...) tools.

  10. An enhanced security solution for electronic medical records based on AES hybrid technique with SOAP/XML and SHA-1.

    Science.gov (United States)

    Kiah, M L Mat; Nabi, Mohamed S; Zaidan, B B; Zaidan, A A

    2013-10-01

    This study aims to provide security solutions for implementing electronic medical records (EMRs). E-Health organizations could utilize the proposed method and implement recommended solutions in medical/health systems. Majority of the required security features of EMRs were noted. The methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate the safeguarding of the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed to be implemented on EMR systems. This method will enhance the robustness, security, and integration of EMR systems. The hybrid of simple object access protocol/extensible markup language (XML) with advanced encryption standard and secure hash algorithm version 1 has achieved the security requirements of an EMR system with the capability of integrating with other systems through the design of XML messages.
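The integrity-checking part of such a hybrid can be sketched with Python's standard library. AES encryption and the SOAP/XML envelope are omitted, the message shape is invented, and SHA-1 appears only because the paper names it (it is no longer recommended for new designs).

```python
import hashlib

# Illustrative XML message; not the authors' actual EMR schema.
MESSAGE = "<record><patient>P-17</patient><note>stable</note></record>"

def digest(xml_text):
    """SHA-1 digest of the serialised XML message."""
    return hashlib.sha1(xml_text.encode("utf-8")).hexdigest()

def verify(xml_text, expected_digest):
    """True only if the message arrived unmodified."""
    return digest(xml_text) == expected_digest

tag = digest(MESSAGE)
```

In the full scheme the digest would travel inside the SOAP message and the payload would additionally be AES-encrypted.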

  11. Defining the XML schema matching problem for a personal schema based query answering system

    OpenAIRE

    Smiljanic, M.; van Keulen, Maurice; Jonker, Willem

    2004-01-01

In this report, we analyze the problem of personal schema matching. We define the ingredients of the XML schema matching problem using constraint logic programming. This allows us to thoroughly investigate specific matching problems. We do not have the ambition to provide a formalism that covers all kinds of schema matching problems; the target is specifically personal schema matching using XML. The report is organized as follows. Chapter 2 provides a detailed description of our research ...

  12. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Science.gov (United States)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
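The core conversion can be sketched in Python, although the tool itself runs as HTML5/JavaScript in the browser. The NAMELIST group and the flat XML shape below are invented for illustration and are not the tool's actual output schema.

```python
import re
import xml.etree.ElementTree as ET

# A tiny Fortran NAMELIST group (contents are illustrative).
NAMELIST = """
&plasma
  ne0 = 1.0e19,
  te0 = 2.5
/
"""

def namelist_to_xml(text):
    """Convert one '&group ... /' block into
    <group><var>value</var>...</group>."""
    m = re.search(r"&(\w+)(.*?)^/", text, re.S | re.M)
    group = ET.Element(m.group(1))
    for name, value in re.findall(r"(\w+)\s*=\s*([^,\n]+)", m.group(2)):
        ET.SubElement(group, name).text = value.strip().rstrip(",")
    return group

root = namelist_to_xml(NAMELIST)
```

The reverse direction, XML back to NAMELIST, is a straightforward walk over the element tree, which is what makes the XML form convenient as an interchange format.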

  13. NeXML: rich, extensible, and verifiable representation of comparative data and metadata.

    Science.gov (United States)

    Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-07-01

    In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.

  14. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Directory of Open Access Journals (Sweden)

    O. Naito

    2017-01-01

    Full Text Available A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.

  15. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    Science.gov (United States)

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
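    The approach rests on XSLT's ability to turn one validated XML knowledge structure into several renderings. A minimal stylesheet in that spirit, with hypothetical element and attribute names (the study's actual schema is not reproduced here), might filter cues by the number of endorsing experts:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Keep only cues endorsed by at least 5 experts; "riskmap",
       "cue", and "experts" are illustrative names. -->
  <xsl:template match="/riskmap">
    <cues>
      <xsl:apply-templates select="cue[@experts &gt;= 5]"/>
    </cues>
  </xsl:template>
  <xsl:template match="cue">
    <cue name="{@name}" experts="{@experts}"/>
  </xsl:template>
</xsl:stylesheet>
```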

  16. An Object-Oriented Approach of Keyword Querying over Fuzzy XML

    Directory of Open Access Journals (Sweden)

    Ting Li

    2016-09-01

    Full Text Available As fuzzy data management has become one of the main research topics, the question of how to obtain useful information from fuzzy XML documents by means of keyword query is a subject of increasing investigation. Among keyword query methods on crisp XML documents, smallest lowest common ancestor (SLCA) semantics is one of the most widely accepted. When users pose keyword queries on fuzzy XML documents under SLCA semantics, the query results are often incomplete, have low precision, and return no possibility values. Moreover, most keyword query semantics on XML documents only consider query results matching all keywords, yet users may also be interested in results matching partial keywords. To overcome these limitations, in this paper we investigate how to obtain more comprehensive and meaningful results of keyword querying on fuzzy XML documents. We propose a semantics of object-oriented keyword querying on fuzzy XML documents. First, we introduce the concept of the "object tree", analyze different types of matching result object trees, and identify the "minimum result object trees", which contain all keywords, and the "result object trees", which contain partial keywords. Then an object-oriented keyword query algorithm, ROstack, is proposed to obtain the root nodes of these matching result object trees together with their possibilities. Finally, experiments are conducted to verify the effectiveness and efficiency of the proposed algorithm.
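    On crisp XML, the SLCA idea itself is compact: return the deepest elements whose subtrees contain every keyword. A minimal sketch (plain SLCA on a crisp document, not the paper's ROstack algorithm or its possibility handling):

```python
import xml.etree.ElementTree as ET

def slca(root, keywords):
    """Return the Smallest Lowest Common Ancestors: elements whose
    subtree text contains every keyword, and none of whose children
    already do (i.e. the deepest such elements)."""
    wanted = {k.lower() for k in keywords}
    results = []

    def covers(elem):
        text = " ".join(elem.itertext()).lower()
        return all(k in text for k in wanted)

    def visit(elem):
        # If any child subtree covers all keywords, the SLCA is deeper.
        hit_below = False
        for child in elem:
            hit_below = visit(child) or hit_below
        if hit_below:
            return True
        if covers(elem):
            results.append(elem)
            return True
        return False

    visit(root)
    return results

doc = ET.fromstring(
    "<bib>"
    "<book><title>XML retrieval</title><author>Smith</author></book>"
    "<book><title>Databases</title><author>Jones</author></book>"
    "</bib>"
)
matches = slca(doc, ["XML", "Smith"])
print([m.tag for m in matches])  # only the first <book> covers both keywords
```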

  17. An XML based middleware for ECG format conversion.

    Science.gov (United States)

    Li, Xuchen; Vojisavljevic, Vuk; Fang, Qiang

    2009-01-01

    With the rapid development of information and communication technologies, various e-health solutions have been proposed. Digitized medical images and mono-dimensional medical signals are two major forms of medical information that are stored and manipulated within an electronic medical environment. Though a variety of industrial and international standards such as DICOM and HL7 have been proposed, many proprietary formats are still pervasively used by Hospital Information System (HIS) and Picture Archiving and Communication System (PACS) vendors. Those proprietary formats are a major hurdle to forming a nationwide or even worldwide e-health network. Thus there is an imperative need to solve the medical data integration problem. Moreover, many small clinics, hospitals in developing countries, and some regional hospitals in developed countries with limited budgets have shied away from embracing the latest medical information technologies due to their high costs. In this paper, we propose an XML-based middleware which acts as a translation engine to seamlessly integrate clinical ECG data from a variety of proprietary data formats. Furthermore, this ECG translation engine is designed so that it can be integrated into an existing PACS to provide a low-cost medical information integration and storage solution.

  18. Using XML and Java for Astronomical Instrumentation Control

    Science.gov (United States)

    Ames, Troy; Koons, Lisa; Sall, Ken; Warsaw, Craig

    2000-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human readable and machine understandable way to describe structured data. A key aspect of the object-oriented architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, and communication mechanisms. Although the current effort is targeted for the High-resolution Airborne Wideband Camera, a first-light instrument of the Stratospheric Observatory for Infrared Astronomy, the framework is designed to be generic and extensible so that it can be applied to any instrument.

  19. Applying GRID Technologies to XML Based OLAP Cube Construction

    CERN Document Server

    Niemi, Tapio Petteri; Nummenmaa, J; Thanisch, P

    2002-01-01

    On-Line Analytical Processing (OLAP) is a powerful method for analysing large data warehouse data. Typically, the data for an OLAP database is collected from a set of data repositories such as operational databases. This data set is often huge, and it may not be known in advance what data is required, and when, to perform the desired data analysis tasks. Sometimes it may happen that some parts of the data are only needed occasionally. Therefore, storing all data in the OLAP database and keeping this database constantly up-to-date is not only a highly demanding task but may also be overkill in practice. This suggests that in some applications it would be more feasible to form the OLAP cubes only when they are actually needed. However, OLAP cube construction can be a slow process. Thus, we present a system that applies Grid technologies to distribute the computation. As the data sources may well be heterogeneous, we propose an XML language for data collection. The user's definition for a new OLAP cube...

  20. Converting Biomolecular Modelling Data Based on an XML Representation

    Directory of Open Access Journals (Sweden)

    Sun Yudong

    2008-06-01

    Full Text Available Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  1. Transitioning from XML to RDF: Considerations for an effective move towards Linked Data and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Juliet L. Hardesty

    2016-04-01

    Full Text Available Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF. Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
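    The shift in perspective can be illustrated with one hypothetical record (invented values, real Dublin Core vocabulary). In XML, meaning lives in the nesting:

```xml
<record id="item42">
  <title>Field notes, 1924</title>
  <creator>Hardy, A.</creator>
</record>
```

    In RDF (Turtle syntax), the same information becomes explicit relationships between an identified resource and its properties, independent of any document structure:

```turtle
@prefix dc: <http://purl.org/dc/elements/1.1/> .

<http://example.org/item42>
    dc:title   "Field notes, 1924" ;
    dc:creator "Hardy, A." .
```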

  2. jMRUI plugin software (jMRUI2XML) to allow automated MRS processing and XML-based standardized output

    Czech Academy of Sciences Publication Activity Database

    Mocioiu, V.; Ortega-Martorell, S.; Olier, I.; Jabłoński, Michal; Starčuková, Jana; Lisboa, P.; Arús, C.; Julia-Sapé, M.

    2015-01-01

    Roč. 28, S1 (2015), S518 ISSN 0968-5243. [ESMRMB 2015. Annual Scientific Meeting /32./. 01.09.2015-03.09.2015, Edinburgh] Institutional support: RVO:68081731 Keywords : MR Spectroscopy * signal processing * jMRUI * software development * XML Subject RIV: BH - Optics, Masers, Lasers

  3. Semantic validation of standard-based electronic health record documents with W3C XML schema.

    Science.gov (United States)

    Rinner, C; Janzek-Hawlat, S; Sibinovic, S; Duftschmid, G

    2010-01-01

    The goal of this article is to examine whether W3C XML Schema provides a practicable solution for the semantic validation of standard-based electronic health record (EHR) documents. With semantic validation we mean that the EHR documents are checked for conformance with the underlying archetypes and reference model. We describe an approach that allows XML Schemas to be derived from archetypes based on a specific naming convention. The archetype constraints are augmented with additional components of the reference model within the XML Schema representation. A copy of the EHR document that is transformed according to the aforementioned naming convention is used for the actual validation against the XML Schema. We tested our approach by semantically validating EHR documents conformant to three different ISO/EN 13606 archetypes corresponding to three sections of the CDA implementation guide "Continuity of Care Document (CCD)" and an implementation guide for diabetes therapy data. We further developed a tool to automate the different steps of our semantic validation approach. For two particular kinds of archetype prescriptions, individual transformations are required for the corresponding EHR documents. Otherwise, a fully generic validation is possible. In general, we consider W3C XML Schema a practicable solution for the semantic validation of standard-based EHR documents.
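    A sketch of what a derived schema fragment could look like, with an invented element name following an archetype-style naming convention and an invented value range (the paper defines the actual convention):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Hypothetical: the element name encodes the archetype node id,
       and the facets carry the archetype's value constraints. -->
  <xs:element name="systolic.at0004">
    <xs:simpleType>
      <xs:restriction base="xs:integer">
        <xs:minInclusive value="0"/>
        <xs:maxInclusive value="300"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>
```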

  4. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. The application of XML in the effluents data modeling of nuclear facilities

    International Nuclear Information System (INIS)

    Yue Feng; Lin Quanyi; Yue Huiguo; Zhang Yan; Zhang Peng; Cao Jun; Chen Bo

    2013-01-01

    The radioactive effluent data, which can provide information to distinguish whether facilities, waste disposal, and control system run normally, is an important basis of safety regulation and emergency management. It can also provide the information to start emergency alarm system as soon as possible. XML technology is an effective tool to realize the standard of effluent data exchange, in favor of data collection, statistics and analysis, strengthening the effectiveness of effluent regulation. This paper first introduces the concept of XML, the choices of effluent data modeling method, and then emphasizes the process of effluent model, finally the model and application are shown, While there is deficiency about the application of XML in the effluents data modeling of nuclear facilities, it is a beneficial attempt to the informatization management of effluents. (authors)

  6. ForConX: A forcefield conversion tool based on XML.

    Science.gov (United States)

    Lesch, Volker; Diddens, Diddo; Bernardes, Carlos E S; Golub, Benjamin; Dequidt, Alain; Zeindlhofer, Veronika; Sega, Marcello; Schröder, Christian

    2017-04-05

    Converting a force field from one MD program to another is exhausting and error-prone. Although individual conversion tools between particular MD programs exist, not every combination, nor both directions of conversion, is available for the popular MD programs Amber, Charmm, Dl-Poly, Gromacs, and Lammps. We present here a general tool for force field conversion on the basis of an XML document. The force field is converted to and from this XML structure, facilitating the implementation of new MD programs in the conversion. Furthermore, the XML structure is human readable and can be manipulated before continuing the conversion. We report, as test cases, the conversions of topologies for acetonitrile, dimethylformamide, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, comprising also Urey-Bradley and Ryckaert-Bellemans potentials. © 2017 Wiley Periodicals, Inc.
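    Such an intermediate document might represent each interaction term explicitly, so that any MD program's reader or writer only has to map its own format onto these elements. A sketch with illustrative element names and parameters (not ForConX's actual schema):

```xml
<forcefield>
  <molecule name="acetonitrile">
    <!-- one harmonic bond and one Ryckaert-Bellemans dihedral;
         units and parameter names are illustrative -->
    <bond atoms="C1 N1" type="harmonic" k="1070.0" r0="0.1157"/>
    <dihedral atoms="H1 C2 C1 N1" type="ryckaert-bellemans"
              c0="0.0" c1="0.0" c2="0.0" c3="0.0" c4="0.0" c5="0.0"/>
  </molecule>
</forcefield>
```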

  7. PRIDEViewer: a novel user-friendly interface to visualize PRIDE XML files.

    Science.gov (United States)

    Medina-Aunon, J Alberto; Carazo, José M; Albar, Juan Pablo

    2011-01-01

    Current standardization initiatives have greatly contributed to sharing the information derived from proteomics experiments. One of these initiatives is the XML-based repository PRIDE (PRoteomics IDEntification database), although an XML-based document does not present a user-friendly view at first glance. PRIDEViewer is a novel Java-based application that presents the information available in a PRIDE XML file in a user-friendly manner, facilitating the interaction among end users as well as the understanding and evaluation of the compiled information. PRIDEViewer is freely available at: http://proteo.cnb.csic.es/prideviewer/. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. An XML Based Knowledge Management System for e-Collaboration and e-Learning

    Directory of Open Access Journals (Sweden)

    Varun Gopalakrishna

    2004-02-01

    Full Text Available This paper presents the development, key features, and the implementation principles of a sustainable and scalable knowledge management system (KMS) prototype for creating, capturing, organizing, and managing digital information in the form of Extensible Markup Language (XML) documents and other popular file formats. It aims to provide a platform for global, instant, and secure access to and dissemination of information within a knowledge-intensive organization or a cluster of organizations through the Internet or an intranet. A three-tier system architecture was chosen for the KMS to provide performance and scalability while enabling future development that supports global, secure, real-time, and multi-media communication of information and knowledge among team members separated by great distance. An XML Content Server has been employed in this work to store, index, and retrieve large volumes of XML and binary content.

  9. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    Science.gov (United States)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS) to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  10. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O., E-mail: naito.osamu@jaea.go.j [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka, Ibaraki 311-0193 (Japan)

    2010-07-15

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats, but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.
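    A tag file in this spirit might look like the following sketch; all element and attribute names are invented for illustration, not taken from the paper:

```xml
<!-- Describes a separate binary data body without touching it -->
<data name="ECE_Te" shot="48721">
  <file href="te_48721.dat" encoding="binary"/>
  <array type="float64" order="column-major">
    <dim name="time" size="2048" unit="s"/>
    <dim name="channel" size="48"/>
  </array>
  <description>Electron temperature from ECE radiometer</description>
</data>
```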

  11. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    International Nuclear Information System (INIS)

    Naito, O.

    2010-01-01

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats, but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.

  12. About Hierarchical XML Structures, Replacement of Relational Data Structures in Construction and Implementation of ERP Systems

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The project's essential objective is to develop a new ERP system, of homogeneous nature, based on XML structures, as a possible replacement for classic ERP systems. The criteria that guide the objective definition are modularity, portability and Web connectivity. This objective is connected to a series of secondary objectives, considering that the technological approach will be filtered through the economic, social and legislative environment for a validation-by-context study. Statistics and cybernetics are to be used for simulation purposes. The homogeneous approach is meant to provide strong modularity and portability, in relation with the n-tier principles, but the main advantage of the model is its opening to the semantic Web, based on a small-enterprise ontology defined with XML-driven languages. Shockwave solutions will be used for implementing client-oriented hypermedia elements, and an XML Gate will be defined between black-box modules, for a clear separation with obvious advantages. Security and the XMLTP project will be an important issue for XML transfers due to the conflict between the open architecture of the Web, the readability of XML data and the privacy elements which have to be preserved within a business environment. The project's finality is oriented toward small business, but the semantic Web perspective and the surprising new conflict between hierarchical/network data structures and relational ones will certainly widen its scope. The proposed model is meant to fulfill the IT compatibility requirements of the European environment, defined as a knowledge society. The paper is a brief account of the contributions of the team research in the CNCSIS type A project "Research on the Role of XML in Building Extensible and Homogeneous ERP Systems".

  13. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO19115, DIF, DataCite), while being at the same time usable by and understandable for scientists. Emphasis is placed on removing barriers; in particular, the editor is publicly available on the internet without registration [1] and the scientists are not requested to provide information that may be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientific language, e.g. 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes.
Currently, we are extending the metadata editor

  14. A new XML-aware compression technique for improving performance of healthcare information systems over hospital networks.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Most organizations exchange, collect, store and process data over the Internet. Many hospital networks deploy Web services to send and receive patient information. SOAP (Simple Object Access Protocol) is the most widely used communication protocol for Web services. XML is the standard encoding language of SOAP messages. However, the major drawback of XML messages is the high network traffic caused by large overheads. In this paper, two XML-aware compressors are suggested to compress patient messages stemming from any data transactions between Web clients and servers. The proposed compression techniques are based on XML structure concepts and use both fixed-length and Huffman encoding methods for translating the XML message tree. Experiments show that they outperform all the conventional compression methods and can save a tremendous amount of network bandwidth.

  15. Value of XML in the implementation of clinical practice guidelines--the issue of content retrieval and presentation.

    Science.gov (United States)

    Hoelzer, S; Schweiger, R K; Boettcher, H A; Tafazzoli, A G; Dudeck, J

    2001-01-01

    that preserves the original cohesiveness. The lack of structure limits the automatic identification and extraction of the information contained in these resources. For this reason, we have chosen a document-based approach using eXtensible Markup Language (XML) with its schema definition and related technologies. XML empowers the applications for in-context searching. In addition it allows the same content to be represented in different ways. Our XML reference clinical data model for guidelines has been realized with the XML schema definition. The schema is used for structuring new text-based guidelines and updating existing documents. It is also used to establish search strategies on the document base. We hypothesize that enabling the physicians to query the available CPGs easily, and to get access to selected and specific information at the point of care will foster increased use. Based on current evidence we are confident that it will have substantial impact on the care provided, and will improve health outcomes.

  16. XML Schema of PaGE-OM: page-om.xsd

    Lifescience Database Archive (English)

    Full Text Available one or more variation assays (e.g. assay multiplexing Assay_set). Note: These are optional laboratory specif...fication is used for data exchange formats (e.g. xml-schema). Therefore, it has optional direct associations

  17. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    Science.gov (United States)

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  18. XML schemas for common bioinformatic data types and their application in workflow systems.

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.

  19. Association rule extraction from XML stream data for wireless sensor networks.

    Science.gov (United States)

    Paik, Juryon; Nam, Junghyun; Kim, Ung Mo; Won, Dongho

    2014-07-18

    As wireless sensor networks advance, they yield massive volumes of disparate, dynamic, geographically distributed and heterogeneous data. The data mining community has attempted to extract knowledge from the huge amount of data that they generate. However, previous mining work in WSNs has focused on supporting simple relational data structures, like one table per network, while there is a need for more complex data structures. This deficiency motivates the use of XML, the current de facto format for data exchange and the modeling of a wide variety of data sources over the web, in WSNs in order to encourage the interchangeability of heterogeneous types of sensors and systems. However, mining XML data in WSNs has two challenging issues: one is the endless data flow; and the other is the complex tree structure. In this paper, we present several new definitions and techniques related to association rule mining over XML data streams in WSNs. To the best of our knowledge, this work provides the first approach to mining XML stream data that generates frequent tree items without any redundancy.

  20. Experience in Computer-Assisted XML-Based Modelling in the Context of Libraries

    CERN Document Server

    Niinimäki, M

    2003-01-01

    In this paper, we introduce a software tool called Meta Data Visualisation (MDV) that (i) assists the user with a graphical user interface in the creation of his specific document types, (ii) creates a database according to these document types, (iii) allows the user to browse the database, and (iv) uses a native XML presentation of the data in order to allow queries or data to be exported to other XML-based systems. We illustrate the use of MDV and XML modelling with library-related examples to build a bibliographic database. In our opinion, creating document type descriptions corresponds to conceptual and logical database design in a database design process. We consider that this design can be supported with a suitable set of tools that help the designer concentrate on conceptual issues instead of implementation issues. Our hypothesis is that using the methodology presented in this paper we can create XML databases that are useful and relevant, and with which MDV works as a user interface.

  1. Report on the first Twente Data Management Workshop on XML Databases and Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Mihajlovic, V.

    2004-01-01

    The Database Group of the University of Twente initiated a new series of workshops called Twente Data Management workshops (TDM), starting with one on XML Databases and Information Retrieval which took place on 21 June 2004 at the University of Twente. We have set ourselves two goals for the

  2. XSAMS: XML Schema for Atoms, Molecules and Solids. Summary report of an IAEA Consultants' Meeting

    Energy Technology Data Exchange (ETDEWEB)

    Braams, B J [International Atomic Energy Agency, Vienna (Austria)

    2010-05-15

    Experts on atomic and molecular data and their exchange met at National Institute for Fusion Science, Toki-City, Japan, to review progress in the implementation of XSAMS, the XML Schema for Atoms, Molecules and Solids, and to discuss further development of the Schema. The proceedings of the meeting are summarized here. (author)

  3. XSAMS: XML Schema for Atoms, Molecules and Solids. Summary report of an IAEA Consultants' Meeting

    International Nuclear Information System (INIS)

    Braams, B.J.

    2010-05-01

    Experts on atomic and molecular data and their exchange met at National Institute for Fusion Science, Toki-City, Japan, to review progress in the implementation of XSAMS, the XML Schema for Atoms, Molecules and Solids, and to discuss further development of the Schema. The proceedings of the meeting are summarized here. (author)

  4. A Novel Approach for Configuring The Stimulator of A BCI Framework Using XML

    Directory of Open Access Journals (Sweden)

    Indar Sugiarto

    2009-08-01

    Full Text Available In a working BCI framework, all aspects must be considered as integral parts that contribute to the successful operation of a BCI system. This includes the development of a robust but flexible stimulator, especially one that is closely related to the feedback of a BCI system. This paper describes a novel approach to providing a flexible visual stimulator using XML, which has been applied to a BCI (brain-computer interface) framework. By using an XML file format to configure the visual stimulator of a BCI system, we can develop BCI applications that accommodate many experimental strategies in BCI research. The BCI framework and its configuration platform are developed in the C++ programming language and incorporate Qt's XML parser QXmlStream. The implementation and experiments show that the XML configuration file executes well within the proposed BCI framework. Besides its capability to present flexible flickering frequencies and text formatting for SSVEP-based BCI, the configuration platform also provides 3 shapes, 16 colors, and 5 distinct feedback bars. It is not necessary to increase the number of shapes or colors, since those parameters are less important for the BCI stimulator. The proposed method can be extended to enhance the usability of existing BCI frameworks such as BF++ Toys and BCI 2000.
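
A minimal sketch of the XML-configured stimulator idea in Python's standard library. The element and attribute names (`target`, `shape`, `frequency`) are invented stand-ins, not the framework's actual schema, which is parsed with Qt's QXmlStream in C++:

```python
import xml.etree.ElementTree as ET

# Hypothetical stimulator configuration; names are illustrative only.
CONFIG = """\
<stimulator>
  <target id="left" shape="rectangle" color="red">
    <frequency unit="Hz">7.5</frequency>
    <text>LEFT</text>
  </target>
  <target id="right" shape="circle" color="blue">
    <frequency unit="Hz">12.0</frequency>
    <text>RIGHT</text>
  </target>
</stimulator>
"""

def load_targets(xml_text):
    """Return a list of (id, shape, color, frequency_hz) tuples."""
    root = ET.fromstring(xml_text)
    targets = []
    for t in root.findall("target"):
        freq = float(t.findtext("frequency"))
        targets.append((t.get("id"), t.get("shape"), t.get("color"), freq))
    return targets

if __name__ == "__main__":
    for tid, shape, color, freq in load_targets(CONFIG):
        print(f"{tid}: {shape}/{color} flickering at {freq} Hz")
```

Changing an experiment strategy then means editing the XML file, not recompiling the application.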

  5. Defining the XML schema matching problem for a personal schema based query answering system

    NARCIS (Netherlands)

    Smiljanic, M.; van Keulen, Maurice; Jonker, Willem

    In this report, we analyze the problem of personal schema matching. We define the ingredients of the XML schema matching problem using constraint logic programming. This allows us to thoroughly investigate specific matching problems. We do not have the ambition to provide for a formalism that covers

  6. XML como medio de normalización y desarrollo documental.

    Directory of Open Access Journals (Sweden)

    de la Rosa, Antonio

    1999-12-01

    Full Text Available The Web, as a working environment for information science professionals, demands the exploitation of new tools. These tools are intended to allow information to be managed in a structured and organised way. XML and its specifications offer a wide range of solutions for the problems of our domain, whether for the development of documentary software or for day-to-day tasks. In this article, the XML standard is briefly presented and its possible impact on the profession is evaluated, as well as the possibilities of using it as a vehicle for the creation of information systems.


  7. FireCalc: An XML-based framework for distributed data analysis

    International Nuclear Information System (INIS)

    Duarte, A.S.; Santos, J.H.; Fernandes, H.; Neto, A.; Pereira, T.; Varandas, C.A.F.

    2008-01-01

    Requirements and specifications for Control Data Access and Communication (CODAC) systems in fusion reactors point towards flexible and modular solutions, independent from operating system and computer architecture. These concepts can also be applied to calculation and data analysis systems, where highly standardized solutions must also apply in order to anticipate long time-scales and high technology evolution changes. FireCalc is an analysis tool based on standard Extensible Markup Language (XML) technologies. Actions are described in an XML file, which contains necessary data specifications and the code or references to scripts. This is used by the user to send the analysis code and data to a server, which can be running either locally or remotely. Communications between the user and the server are performed through XML-RPC, an XML based remote procedure call, thus enabling the client and server to be coded in different computer languages. Access to the database, security procedures and calls to the code interpreter are handled through independent modules, which unbinds them from specific solutions. Currently there is an implementation of the FireCalc framework in Java, which uses the Shared Data Access System (SDAS) for accessing the ISTTOK database and the Scilab kernel for the numerical analysis.
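
The client/server split over XML-RPC can be sketched with Python's standard library (FireCalc itself uses Java and Scilab; the `run_analysis` endpoint below is a hypothetical stand-in). Because only the XML-RPC wire format is shared, the two sides could be written in different languages:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def run_analysis(samples):
    """Stand-in analysis routine: return the mean of the submitted data."""
    return sum(samples) / len(samples)

# Server side: register the analysis function and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(run_analysis)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call is marshalled as XML over HTTP.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.run_analysis([1.0, 2.0, 3.0, 4.0])
print("mean =", result)  # mean = 2.5

server.shutdown()
```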

  8. Integration of HTML documents into an XML-based knowledge repository.

    Science.gov (United States)

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler / viewer / editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded to a database for use by EPIG.
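
The general shape of an HTML-to-XML conversion can be sketched with Python's standard library; this is an illustrative approach, not the actual EPIG pipeline, and it assumes simple, well-nested HTML with valued attributes:

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

# HTML void tags have no closing tag and must be closed explicitly in XML.
VOID = {"br", "hr", "img", "input", "meta", "link"}

class HtmlToXml(HTMLParser):
    """Rebuild simple, well-nested HTML as a well-formed XML element tree."""
    def __init__(self):
        super().__init__()
        self.builder = ET.TreeBuilder()
    def handle_starttag(self, tag, attrs):
        self.builder.start(tag, dict(attrs))
        if tag in VOID:
            self.builder.end(tag)
    def handle_endtag(self, tag):
        if tag not in VOID:
            self.builder.end(tag)
    def handle_data(self, data):
        self.builder.data(data)

html_doc = ('<div class="instruction"><h1>Wound care</h1>'
            "<p>Keep the area dry.<br>Change dressing daily.</p></div>")
parser = HtmlToXml()
parser.feed(html_doc)
root = parser.builder.close()
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

The resulting tree can then be loaded into an XML database or validated against a schema.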

  9. UPX: a new XML representation for annotated datasets of online handwriting data

    NARCIS (Netherlands)

    Agrawal, M.; Bali, K.; Madhvanath, S.; Vuurpijl, L.G.

    2005-01-01

    This paper introduces our efforts to create UPX, an XML-based successor to the venerable UNIPEN format for the representation of annotated datasets of online handwriting data. In the first part of the paper, shortcomins of the UNIPEN format are dicussed and the goals of UPX are outlined. Prior work

  10. A Self-adaptive Scope Allocation Scheme for Labeling Dynamic XML Documents

    NARCIS (Netherlands)

    Shen, Y.; Feng, L.; Shen, T.; Wang, B.

    This paper proposes a self-adaptive scope allocation scheme for labeling dynamic XML documents. It is general, light-weight and can be built upon existing data retrieval mechanisms. Bayesian inference is used to compute the actual scope allocated for labeling a certain node based on both the prior
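
The underlying idea of scope allocation can be illustrated with a toy range-labeling scheme: each node receives a (start, end) interval so that ancestor/descendant tests reduce to interval containment, and extra scope is reserved for future insertions. The fixed over-allocation factor below is a crude stand-in for the paper's Bayesian scope estimate, which is not reproduced here:

```python
def label(tree, counter=None, gap=10):
    """tree = (name, [subtrees]); returns {name: (start, end)} interval labels."""
    if counter is None:
        counter = [0]
    labels = {}
    name, children = tree
    counter[0] += 1
    start = counter[0]
    for child in children:
        labels.update(label(child, counter, gap))
    counter[0] += gap  # spare scope so later insertions need no relabeling
    labels[name] = (start, counter[0])
    return labels

doc = ("site", [("reading", [("temp", []), ("hum", [])]), ("status", [])])
L = label(doc)

def is_ancestor(a, b):
    """Ancestor iff a's interval strictly contains b's."""
    return L[a][0] < L[b][0] and L[b][1] < L[a][1]

print(L)
print(is_ancestor("site", "temp"))      # True
print(is_ancestor("reading", "status")) # False
```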

  11. Von der XML-Datenbasis zur nutzergerecht strukturierten Web-Site

    NARCIS (Netherlands)

    Freitag, D.; Wombacher, Andreas

    2002-01-01

    Due to the increasing access to information on the WWW from different sorts of device types, content providers have to solve the problem of how to present the appropriate content both effectively and in a way that takes the needs of the device type into account. The XML language family offers the possibility to present

  12. Modeling views in the layered view model for XML using UML

    NARCIS (Netherlands)

    Rajugan, R.; Dillon, T.S.; Chang, E.; Feng, L.

    In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Conversely, since the introduction of Extensible Markup Language (XML), it is fast emerging as the dominant standard for

  13. Association Rule Extraction from XML Stream Data for Wireless Sensor Networks

    Science.gov (United States)

    Paik, Juryon; Nam, Junghyun; Kim, Ung Mo; Won, Dongho

    2014-01-01

    As wireless sensor networks (WSNs) advance, they yield massive volumes of disparate, dynamic, geographically distributed and heterogeneous data. The data mining community has attempted to extract knowledge from the huge amounts of data that these networks generate. However, previous mining work in WSNs has focused on supporting simple relational data structures, such as one table per network, while there is a need for more complex data structures. This deficiency motivates the use in WSNs of XML, the current de facto format for the exchange and modeling of a wide variety of data sources over the web, in order to encourage interchangeability among heterogeneous types of sensors and systems. However, mining XML data for WSNs raises two challenging issues: one is the endless data flow; the other is the complex tree structure. In this paper, we present several new definitions and techniques related to association rule mining over XML data streams in WSNs. To the best of our knowledge, this work provides the first approach to mining XML stream data that generates frequent tree items without any redundancy. PMID:25046017
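
A much-simplified sketch of mining tree-structured stream data: each incoming XML fragment is decomposed into root-to-node label paths, and paths meeting a minimum support are reported. The paper's actual algorithm mines frequent subtrees without redundancy; plain path counting is only a toy stand-in, and the sensor message format is invented:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def paths(elem, prefix=()):
    """Yield every root-to-node label path in an element tree."""
    cur = prefix + (elem.tag,)
    yield cur
    for child in elem:
        yield from paths(child, cur)

# Hypothetical stream of small XML fragments, one per sensor message.
stream = [
    "<reading><temp>21</temp><hum>40</hum></reading>",
    "<reading><temp>22</temp></reading>",
    "<reading><temp>20</temp><hum>42</hum></reading>",
]

counts = Counter()
for fragment in stream:
    counts.update(paths(ET.fromstring(fragment)))

min_support = 3
frequent = sorted(p for p, c in counts.items() if c >= min_support)
print(frequent)  # [('reading',), ('reading', 'temp')]
```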

  14. Using Web Services and XML Harvesting to Achieve a Dynamic Web Site. Computers in Small Libraries

    Science.gov (United States)

    Roberts, Gary

    2005-01-01

    Exploiting and contextualizing free information is a natural part of library culture. In this column, Gary Roberts, the information systems and reference librarian at Herrick Library, Alfred University in Alfred, NY, describes how to use XML content on a Web site to link to hundreds of free and useful resources. He gives a general overview of the…

  15. XML schemas for common bioinformatic data types and their application in workflow systems

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at , the BioDOM library can be obtained at . Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823
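
The interoperation goal can be illustrated with a minimal, hypothetical sequence record in the spirit of the HOBIT schemas; the real schema names and structure differ (see the BioDOM library). Any downstream tool can recover the data without knowing which program produced it:

```python
import xml.etree.ElementTree as ET

# Producer side: emit a program-independent record (names are invented).
record = ET.Element("sequenceRecord")
seq = ET.SubElement(record, "sequence", id="SEQ0001", type="dna")
seq.text = "ACGTACGT"
desc = ET.SubElement(record, "description")
desc.text = "toy fragment"
xml_text = ET.tostring(record, encoding="unicode")

# Consumer side: a different tool parses the same bytes.
parsed = ET.fromstring(xml_text)
print(parsed.find("sequence").get("type"), parsed.find("sequence").text)
```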

  16. Applying XML-Based Technologies to Developing Online Courses: The Case of a Prototype Learning Environment

    Science.gov (United States)

    Jedrzejowicz, Joanna; Neumann, Jakub

    2007-01-01

    Purpose: This paper seeks to describe XML technologies and to show how they can be applied for developing web-based courses and supporting authors who do not have much experience with the preparation of web-based courses. Design/methodology/approach: When developing online courses the academic staff has to address the following problem--how to…

  17. XML and its impact on content and structure in electronic health care documents.

    Science.gov (United States)

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself are about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338

  18. Automating data acquisition into ontologies from pharmacogenetics relational data sources using declarative object definitions and XML.

    Science.gov (United States)

    Rubin, Daniel L; Hewett, Micheal; Oliver, Diane E; Klein, Teri E; Altman, Russ B

    2002-01-01

    Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time-consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.

  19. FireCalc: An XML-based framework for distributed data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, A.S. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais P-1049-001 Lisboa (Portugal)], E-mail: andre.duarte@cfn.ist.utl.pt; Santos, J.H.; Fernandes, H.; Neto, A.; Pereira, T.; Varandas, C.A.F. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais P-1049-001 Lisboa (Portugal)

    2008-04-15

    Requirements and specifications for Control Data Access and Communication (CODAC) systems in fusion reactors point towards flexible and modular solutions, independent from operating system and computer architecture. These concepts can also be applied to calculation and data analysis systems, where highly standardized solutions must also apply in order to anticipate long time-scales and high technology evolution changes. FireCalc is an analysis tool based on standard Extensible Markup Language (XML) technologies. Actions are described in an XML file, which contains necessary data specifications and the code or references to scripts. This is used by the user to send the analysis code and data to a server, which can be running either locally or remotely. Communications between the user and the server are performed through XML-RPC, an XML based remote procedure call, thus enabling the client and server to be coded in different computer languages. Access to the database, security procedures and calls to the code interpreter are handled through independent modules, which unbinds them from specific solutions. Currently there is an implementation of the FireCalc framework in Java, that uses the Shared Data Access System (SDAS) for accessing the ISTTOK database and the Scilab kernel for the numerical analysis.

  20. CREATING OPEN DIGITAL LIBRARY USING XML: IMPLEMENTATION OF OAI-PMH

    OpenAIRE

    M. Vesely; T. Baron; J.Y. Le Meur; T. Simko

    2002-01-01

    This article describes the implementation of the OAi-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and service provider and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.
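
Harvesting via OAI-PMH boils down to issuing verb requests (`Identify`, `ListRecords`, ...) and parsing namespaced XML responses. A minimal sketch of the parsing step, against an invented, truncated `ListRecords` response carrying Dublin Core metadata:

```python
import xml.etree.ElementTree as ET

# Sample response; the record contents are invented for illustration.
RESPONSE = """\
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:cds.cern.ch:123</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example record</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(RESPONSE)
for rec in root.findall(".//oai:record", NS):
    ident = rec.findtext("oai:header/oai:identifier", namespaces=NS)
    title = rec.findtext(".//dc:title", namespaces=NS)
    print(ident, "->", title)
```

A service provider would repeat this over paginated responses, following `resumptionToken` elements until the set is exhausted.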

  1. Creating Open Digital Library Using XML Implementation of OAi-PMH Protocol at CERN

    CERN Document Server

    Vesely, M; Le Meur, Jean-Yves; Simko, Tibor

    2002-01-01

    This article describes the implementation of the OAi-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and service provider and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.

  2. Creating Open Digital Library Using XML: Implementation of OAi-PMH Protocol at CERN

    OpenAIRE

    Vesely, M; Baron, T; Le Meur, Jean-Yves; Simko, Tibor

    2002-01-01

    This article describes the implementation of the OAi-PMH protocol within the CERN Document Server (CDS). In terms of the protocol, CERN acts both as a data provider and service provider and the two core applications are described. The application of XML Schema and XSLT technology is emphasized.

  3. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    Science.gov (United States)

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
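
The sequential, memory-bounded access pattern can be sketched in Python's standard library with `iterparse`, which is analogous in spirit to the library's sequential API (the element names below are simplified; real mzML spectra carry binary-encoded peak arrays):

```python
import io
import xml.etree.ElementTree as ET

def iter_spectra(stream):
    """Yield one <spectrum> at a time, freeing each element after use
    so memory stays bounded even for multi-gigabyte files."""
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "spectrum":
            yield elem.get("id"), int(elem.get("defaultArrayLength"))
            elem.clear()  # drop the element's children once processed

# Tiny stand-in document; a real mzML file would be streamed from disk.
doc = io.StringIO(
    "<mzML><run>"
    "<spectrum id='scan=1' defaultArrayLength='3'/>"
    "<spectrum id='scan=2' defaultArrayLength='5'/>"
    "</run></mzML>"
)
spectra = list(iter_spectra(doc))
print(spectra)  # [('scan=1', 3), ('scan=2', 5)]
```

Random access, as provided by the OpenMS library, additionally requires an offset index so individual spectra can be seeked to without scanning the whole file.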

  4. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    Directory of Open Access Journals (Sweden)

    Hannes L Röst

    Full Text Available In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.

  5. XML-Based Generator of C++ Code for Integration With GUIs

    Science.gov (United States)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, the input information must be completely duplicated so that it can be presented to the user in a more structured form. Heretofore, this duplication has entailed duplicated software effort and increased susceptibility to software errors, because two independent input-handling mechanisms must be maintained. The present program implements a method in which the input data for a simulation program are completely specified in a single Extensible Markup Language (XML) text file. The key benefit of XML is that it stores input data in a structured manner. More importantly, XML allows not just the storage of data but also descriptions of what each data item is, so the XML file contains information useful for rendering the data in other applications. The program then generates data structures in the C++ language for use in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
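
A toy version of the XML-to-C idea: the input-data specification lives in one XML file, and a matching C++ struct is generated from it. The spec format below is invented; the real tool's schema is richer and also fills objects with values at run time:

```python
import xml.etree.ElementTree as ET

# Hypothetical single-source input specification.
SPEC = """\
<input name="SimulationInput">
  <param name="num_steps" type="int" default="100"/>
  <param name="dt" type="double" default="0.01"/>
</input>
"""

def generate_cpp(spec_xml):
    """Emit a C++ struct declaration mirroring the XML input spec."""
    root = ET.fromstring(spec_xml)
    lines = [f"struct {root.get('name')} {{"]
    for p in root.findall("param"):
        lines.append(f"    {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    return "\n".join(lines)

print(generate_cpp(SPEC))
```

Because both the simulation code and the GUI are generated from the same XML, the two can never drift apart.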

  6. IEEE 1451.1 Standard and XML Web Services: a Powerful Combination to Build Distributed Measurement and Control Systems

    OpenAIRE

    Viegas, Vítor; Pereira, José Dias; Girão, P. Silva

    2006-01-01

    In 2005, we presented the NCAP/XML, a prototype of NCAP (Network Capable Application Processor) that runs under the .NET Framework and makes its functionality available through a set of Web Services using XML (eXtensible Markup Language). Continuing this project, we explain how to use the NCAP/XML to build a Distributed Measurement and Control System (DMCS) compliant with the 1451.1 Std. This paper is divided in two main parts: in the first part, we present the new software...

  7. An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters

    National Research Council Canada - National Science Library

    Stewart, Jeffrey

    2003-01-01

    .... The system uses XML technologies in server/client and stand alone environments. With a web browser, the user views and navigates the system's content from a downloaded file collection or from a centralized data source via a network connection...

  8. Evaluation of efficient XML interchange (EXI) for large datasets and as an alternative to binary JSON encodings

    OpenAIRE

    Hill, Bruce W.

    2015-01-01

    Approved for public release; distribution is unlimited Current and emerging Navy information concepts, including network-centric warfare and Navy Tactical Cloud, presume high network throughput and interoperability. The Extensible Markup Language (XML) addresses the latter requirement, but its verbosity is problematic for afloat networks. JavaScript Object Notation (JSON) is an alternative to XML common in web applications and some non-relational databases. Compact, binary encodings exist ...

  9. A Short Story about XML Schemas, Digital Preservation and Format Libraries

    Directory of Open Access Journals (Sweden)

    Steve Knight

    2012-03-01

    Full Text Available One morning we came in to work to find that one of our servers had made 1.5 million attempts to contact an external server in the preceding hour. It turned out that the calls were being generated by the Library’s digital preservation system (Rosetta while attempting to validate XML Schema Definition (XSD declarations included in the XML files of the Library’s online newspaper application Papers Past, which we were in the process of loading into Rosetta. This paper describes our response to this situation and outlines some of the issues that needed to be canvassed before we were able to arrive at a suitable solution, including the digital preservation status of these XSDs; their impact on validation tools, such as JHOVE; and where these objects should reside if they are considered material to the digital preservation process.
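
A simple audit step could have flagged the problem up front: list every external schema URL an XML file declares before a validator tries to fetch them. A sketch using the standard `xsi:schemaLocation` conventions (the sample document below is invented):

```python
import io
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

def external_schemas(xml_text):
    """Collect schema URLs from xsi:schemaLocation and
    xsi:noNamespaceSchemaLocation attributes."""
    urls = []
    for _, elem in ET.iterparse(io.StringIO(xml_text)):
        loc = elem.get(f"{{{XSI}}}schemaLocation")
        if loc:
            # schemaLocation alternates namespace URI and schema URL.
            urls.extend(loc.split()[1::2])
        noloc = elem.get(f"{{{XSI}}}noNamespaceSchemaLocation")
        if noloc:
            urls.append(noloc)
    return urls

page = """<issue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="urn:papers-past http://example.org/schemas/issue.xsd"/>"""
print(external_schemas(page))  # ['http://example.org/schemas/issue.xsd']
```

Once collected, the URLs can be resolved against a local format library or XML catalog so that validation never depends on a remote server.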

  10. XML schema for atomic and molecular data. Summary report of consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.

    2008-04-01

    Advanced developments in computer technologies offer exciting opportunities for new distribution tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the A+M Data Unit initiated within the collaborative efforts of the DCN (Data Centre Network) a new standard for atomic, molecular and particle surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon meets biannually to discuss progress made on the XML schema, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. (author)

  11. Summary report of consultants' meeting on XML schema for atomic and molecular data

    International Nuclear Information System (INIS)

    Humbert, D.

    2007-07-01

    Advanced developments in computer technologies offer exciting opportunities for new distributed tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the AMD Unit initiated within the collaborative efforts of the DCN (Data Centre Network) a new standard for atomic, molecular and particle surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon, meets biannually to discuss progress made on the XML schema and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. This meeting is the first such gathering of these specialists in 2007. (author)

  12. The Knowledge Sharing Based on PLIB Ontology and XML for Collaborative Product Commerce

    Science.gov (United States)

    Ma, Jun; Luo, Guofu; Li, Hao; Xiao, Yanqiu

    Collaborative Product Commerce (CPC) has become a brand-new commerce mode for manufacturing. In order to promote more efficient information exchange in CPC, a knowledge-sharing framework based on the PLIB (ISO 13584) ontology and XML is presented, and its implementation method is studied. First, following the methodology of PLIB (ISO 13584), a common ontology, the PLIB ontology, is put forward, which provides a coherent conceptual meaning within the context of the CPC domain. Meanwhile, to enable knowledge exchange over the Internet, the formal description of the PLIB ontology in EXPRESS was converted into XML Schema; two mapping methods were considered, a direct mapping approach and a meta-level mapping approach, and the latter was adopted. Based on the above work, a parts-resource knowledge-sharing framework (CPC-KSF) was put forward and realized, and it has been applied in the process of automotive component manufacturing collaborative product commerce.

  13. Enhancement of the Work in Scia Engineer's Environment by Employment of XML Programming Language

    Directory of Open Access Journals (Sweden)

    Kortiš Ján

    2015-12-01

    Full Text Available The productivity of engineers who design building structures according to the rules of technical standards [1] has been increasing in recent years through the use of different software products. These products offer engineers new possibilities for designing different structures. Problems remain, however, especially in the design of structures with similar static schemes, where the same work-steps must be repeated. The work becomes more effective if these steps are performed automatically, with a programming language controlling the processes carried out by the software. The design process of a timber structure, carried out in the environment of the Scia Engineer software, is presented in the article. The XML language is used to automate the design, and the XML code is modified in the Excel environment using the VBA programming language [2], [3].

  14. XML-based assembly visualization for a multi-CAD digital mock-up system

    International Nuclear Information System (INIS)

    Song, In Ho; Chung, Sung Chong

    2007-01-01

    Using a virtual assembly tool, engineers are able to design accurate and interference-free parts without making physical mock-ups. Instead of a single CAD source, several CAD systems are used to design a complex product in a distributed design environment. In this paper, a multi-CAD assembly method is proposed through XML and a lightweight CAD file. The XML data contains the hierarchy of the multi-CAD assembly. The lightweight CAD file, produced from various CAD files through the ACIS kernel and InterOp, includes not only mesh and B-Rep data but also topological data. It is used to visualize CAD data and to verify the dimensions of parts. The developed system is executed on desktop computers. It does not require commercial CAD systems to visualize 3D assembly data. Multi-CAD models have been assembled to verify the effectiveness of the developed DMU system on the Internet

  15. Using XML Configuration-Driven Development to Create a Customizable Ground Data System

    Science.gov (United States)

    Nash, Brent; DeMore, Martha

    2009-01-01

    The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily read and edited by all levels of users and is backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools, which make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse- and fine-grained control over all aspects of GDS operation.
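As a sketch of the configuration-driven idea the record describes, behavior can be driven by an XML settings file rather than compiled code. The configuration document below is invented for illustration and is not the actual MPCS schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical GDS-style configuration; section and option names are
# illustrative only, not taken from MPCS.
CONFIG = """\
<gds>
  <telemetry>
    <frameLength>1115</frameLength>
    <checksums>true</checksums>
  </telemetry>
</gds>
"""

def load_config(xml_text):
    """Parse a configuration document into a flat dict of dotted keys."""
    root = ET.fromstring(xml_text)
    settings = {}
    for section in root:
        for option in section:
            settings[f"{section.tag}.{option.tag}"] = option.text
    return settings

settings = load_config(CONFIG)
print(settings["telemetry.frameLength"])  # → 1115
```

Changing behavior then means editing the XML file and restarting, with no recompilation step.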

  16. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    Science.gov (United States)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful for acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios, including single spot measures, static time series, web-based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable and machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. XML technology is at the heart of communication over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web-based applications, educational outreach and efficient collaboration between research groups.
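The "no dedicated parser" point can be illustrated with Python's standard library alone. The element names below are invented stand-ins, not the published LANML vocabulary:

```python
import xml.etree.ElementTree as ET

# A hypothetical LANML-style record; tags and attributes are illustrative
# only, not taken from the actual LANML schema.
RECORD = """\
<lanml>
  <site id="tucson-01" lat="32.2226" lon="-110.9747"/>
  <measurement utc="2013-05-01T04:30:00Z" band="V" magnitude="20.1"/>
  <measurement utc="2013-05-01T05:30:00Z" band="V" magnitude="20.3"/>
</lanml>
"""

root = ET.fromstring(RECORD)
site = root.find("site").get("id")
readings = [float(m.get("magnitude")) for m in root.findall("measurement")]
print(site, round(sum(readings) / len(readings), 1))  # tucson-01 20.2
```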

  17. QuakeML: status of the XML-based seismological data exchange format

    OpenAIRE

    Joachim Saul; Philipp Kästli; Fabian Euchner; Danijel Schorlemmer

    2011-01-01

    QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments...

  18. Comparing FrameMaker and Quicksilver as Tools for Producing Single Sourced Content from XML

    OpenAIRE

    HUHTAMÄKI, HENRI

    2006-01-01

    The purpose of this study is to compare two programs commonly used for producing technical documentation, Adobe FrameMaker and Broadvision Quicksilver, from a single-sourcing perspective where the source material is in XML format. The aim is to give technical writers and companies specializing in technical communication enough information to choose the right tool for their own purposes. To limit the scope of the study, the tools are tested as the complete packages...

  19. XTCE and XML Database Evolution and Lessons from JWST, LandSat, and Constellation

    Science.gov (United States)

    Gal-Edd, Jonathan; Kreistle, Steven; Fatig. Cirtos; Jones, Ronald

    2008-01-01

    The database organizations within three different NASA projects have advanced current practices by creating database synergy between the various spacecraft life-cycle stakeholders and educating users in the benefits of the Consultative Committee for Space Data Systems (CCSDS) XML Telemetry and Command Exchange (XTCE) format. The combination of XML for managing program data and CCSDS XTCE for exchange is a robust approach that will meet all user requirements using standards and non-proprietary tools. COTS tools for XTCE/XML are wide and varied. Combining various low-cost and free tools can be more expensive in the long run than choosing a more expensive COTS tool that meets all the needs. This was especially important when deploying to 32 remote sites with no need for licenses. A common mission XTCE/XML format between dissimilar systems is possible and is not difficult. Command XTCE is more complex than telemetry, and the use of XTCE/XML metadata to describe pages and scripts is needed due to the proprietary nature of most current ground systems. Other mission and science products such as spacecraft loads, science image catalogs, and mission operation procedures can all be described with XML as well, to increase their flexibility as systems evolve and change. Figure 10 is an example of a spacecraft table load. The word is out and the XTCE community is growing. The first XTCE user group was held in October, and in addition to ESA/ESOC, SC02000, and CNES identified several systems based on XTCE. The second XTCE user group is scheduled for March 10, 2008, with LDMC and others joining. As experience with XTCE grows and the user community receives the promised benefits of using XTCE and XML, interest is growing fast.

  20. An XML Approach of Coding a Morphological Database for Arabic Language

    OpenAIRE

    Gridach, Mourad; Chenfour, Noureddine

    2011-01-01

    We present an XML approach for the production of a morphological database for the Arabic language that will be used in morphological analysis for modern standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For the Arabic language, producing a morphological database is not an easy task, because it has some particularities such as the phenomena of agglutination and a...

  1. Comparing Emerging XML Based Formats from a Multi-discipline Perspective

    Science.gov (United States)

    Sawyer, D. M.; Reich, L. I.; Nikhinson, S.

    2002-12-01

    This paper analyzes the similarities and differences among several examples of an emerging generation of Scientific Data Formats that are based on XML technologies. Some of the factors evaluated include the goals of these efforts, the data models and XML technologies used, and the maturity of currently available software. This paper then investigates the practicality of developing a single set of structural data objects and basic scientific concepts, such as units, that could be used across discipline boundaries and extended by disciplines and missions to create Scientific Data Formats for their communities. This analysis is partly based on an effort sponsored by the ESDIS office at GSFC to compare the Earth Science Markup Language (ESML) and the eXtensible Data Format (XDF), two members of this new generation of XML-based Data Description Languages that have been developed by NASA-funded efforts in recent years. This paper adds FITSML and potentially CDFML to the list of XML-based Scientific Data Formats discussed. This paper draws heavily on a Formats Evolution Process Committee (http://ssdoo.gsfc.nasa.gov/nost/fep/) draft white paper primarily developed by Lou Reich, Mike Folk and Don Sawyer to assist the Space Science community in understanding Scientific Data Formats. One of the primary conclusions of that paper is that a scientific data format object model should be examined along two basic axes: the first is the complexity of the computer/mathematical data types supported, and the second is the level of scientific domain specialization incorporated. This paper also discusses several of the issues that affect the decision on whether to implement a discipline- or project-specific Scientific Data Format as a formal extension of a general-purpose Scientific Data Format or to implement the APIs independently.

  2. An XML-based system for synthesis of data from disparate databases.

    Science.gov (United States)

    Kurc, Tahsin; Janies, Daniel A; Johnson, Andrew D; Langella, Stephen; Oster, Scott; Hastings, Shannon; Habib, Farhat; Camerlengo, Terry; Ervin, David; Catalyurek, Umit V; Saltz, Joel H

    2006-01-01

    Diverse data sets have become key building blocks of translational biomedical research. Data types captured and referenced by sophisticated research studies include high throughput genomic and proteomic data, laboratory data, data from imagery, and outcome data. In this paper, the authors present the application of an XML-based data management system to support integration of data from disparate data sources and large data sets. This system facilitates management of XML schemas and on-demand creation and management of XML databases that conform to these schemas. They illustrate the use of this system in an application for genotype-phenotype correlation analyses. This application implements a method of phenotype-genotype correlation based on phylogenetic optimization of large data sets of mouse SNPs and phenotypic data. The application workflow requires the management and integration of genomic information and phenotypic data from external data repositories and from the results of phenotype-genotype correlation analyses. Our implementation supports the process of carrying out a complex workflow that includes large-scale phylogenetic tree optimizations and application of Maddison's concentrated changes test to large phylogenetic tree data sets. The data management system also allows collaborators to share data in a uniform way and supports complex queries that target data sets.
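The shared-schema storage and querying the abstract describes can be sketched minimally with the standard library. The element and attribute names below are hypothetical, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a tiny XML data set of genotype-phenotype records
# with invented names, standing in for a schema-conformant database.
DATA = """\
<studies>
  <record strain="C57BL/6" snp="rs001" phenotype="high"/>
  <record strain="DBA/2"   snp="rs001" phenotype="low"/>
  <record strain="C57BL/6" snp="rs002" phenotype="low"/>
</studies>
"""

root = ET.fromstring(DATA)
# Because all collaborators use the same schema, a single query works
# uniformly across shared data sets: all records for one mouse strain.
hits = root.findall("record[@strain='C57BL/6']")
print([r.get("snp") for r in hits])  # ['rs001', 'rs002']
```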

  3. XML for nuclear instrument control and monitoring: an approach towards standardisation

    International Nuclear Information System (INIS)

    Bharade, S.K.; Ananthakrishnan, T.S.; Kataria, S.K.; Singh, S.K.

    2004-01-01

    Communication among heterogeneous systems, with applications running under different operating systems and developed on different platforms, has undergone rapid changes due to the adoption of XML standards. These are being developed for different industries such as the chemical, medical and commercial sectors. The High Energy Physics community already has a standard for the exchange of data among different applications under heterogeneous distributed systems, such as the CMS Data Acquisition System. There are a large number of nuclear instruments, supplied by different manufacturers, which are increasingly getting connected. This approach is gaining wider acceptance in instruments at reactor sites, accelerator sites and complex nuclear experiments, especially at centres like CERN. In order for these instruments to be able to describe the data available from them in a platform-independent manner, an XML approach has been developed. This paper is the first attempt at Electronics Division to propose an XML standard for the control, monitoring, data acquisition and analysis data generated by nuclear instruments at accelerator sites, nuclear reactor plants and laboratories. The gamut of nuclear instruments includes multichannel analysers, health physics instruments, accelerator control systems, reactor regulating systems, flux mapping systems etc. (author)

  4. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application

    Science.gov (United States)

    Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-01-01

    Background: The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective: The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods: Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results: The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions: This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894

  5. WaterML: an XML Language for Communicating Water Observations Data

    Science.gov (United States)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data; each water agency uses its own approach. XML (eXtensible Markup Language) languages generalize the approach of Hypertext Markup Language to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
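A minimal sketch of consuming a WaterML-style time series: the tags below are simplified stand-ins, not the real WaterML vocabulary:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical WaterML-like document; the actual WaterML
# schema uses different (namespaced) element names.
SERIES = """\
<timeSeries siteCode="08158000" variable="discharge" unit="cfs">
  <value dateTime="2007-12-01T00:00:00">112.0</value>
  <value dateTime="2007-12-01T01:00:00">108.5</value>
</timeSeries>
"""

root = ET.fromstring(SERIES)
# Every agency publishing in the same format means one parser serves all.
values = [(v.get("dateTime"), float(v.text)) for v in root.findall("value")]
print(root.get("variable"), max(v for _, v in values))  # discharge 112.0
```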

  6. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  7. The version control service for ATLAS data acquisition configuration filesDAQ ; configuration ; OKS ; XML

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data-acquisition-related configuration information in OKS XML files. The total number of files exceeds 1300, and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who had made a modification causing problems, or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to the XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. It also keeps track of all modifications providi...
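The validate-before-commit step the record describes can be sketched as a simple well-formedness check (a real deployment would additionally check consistency against the overall configuration; the file content below is invented):

```python
import xml.etree.ElementTree as ET

def validate_update(xml_text):
    """Accept an edited configuration file only if it is well-formed XML,
    mirroring the check-before-commit idea described in the record."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = "<partition name='ATLAS'><segment id='1'/></partition>"
bad = "<partition name='ATLAS'><segment id='1'></partition>"  # unclosed tag
print(validate_update(good), validate_update(bad))  # True False
```

Rejecting the bad file at validation time prevents a syntax error from ever reaching the central repository.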

  8. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    Science.gov (United States)

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), there exist concrete tools and applications for working with XML-based data. In particular, new-generation web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have been established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems, rising costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, in the medical context only few applications have been developed. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet is changing quickly from an information medium to a communication medium. The first XML-based applications in healthcare show us the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been submitted to the W3C and partly realised in medical applications.

  9. SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode

    International Nuclear Information System (INIS)

    Yan, Y; Mao, W; Jiang, S

    2014-01-01

    Purpose: To introduce a major upgrade of a novel XML beam-composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, an XML beam definition file accommodating all parameters, including kV/MV imaging triggers, can be locally loaded in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam-composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported as a DICOM RT object as the start of the beam editing process, in which the trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes, including all MLC leaves, can be graphically rendered and edited as needed. The time for composing an XML beam has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of the XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor is a lengthy, error-prone process for sophisticated plans. An XML beam-composing tool is highly desirable for R and D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potentials
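As a toy illustration of composing a beam as a sequence of XML control points, the sketch below builds such a document programmatically; the tag names are invented and do not follow the actual (proprietary) TrueBeam schema:

```python
import xml.etree.ElementTree as ET

# Invented tag names for illustration only; the real TrueBeam XML schema
# is different and must be consulted for actual Developer Mode use.
def compose_beam(control_points):
    """Build a beam document from (gantry_angle, monitor_units) pairs."""
    beam = ET.Element("Beam")
    for gantry, mu in control_points:
        cp = ET.SubElement(beam, "ControlPoint")
        ET.SubElement(cp, "GantryRtn").text = str(gantry)
        ET.SubElement(cp, "Mu").text = str(mu)
    return ET.tostring(beam, encoding="unicode")

xml_text = compose_beam([(180.0, 0.0), (179.0, 25.0)])
print(xml_text.count("<ControlPoint>"))  # 2
```

Generating the document from a data structure, as a composing tool does internally, avoids the hand-editing errors the abstract warns about.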

  10. An effective XML based name mapping mechanism within StoRM

    International Nuclear Information System (INIS)

    Corso, E; Forti, A; Ghiselli, A; Magnoni, L; Zappi, R

    2008-01-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data, evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of the data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organizations that want to use the storage areas. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation
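The database-free lookup the abstract describes can be sketched as follows, with an invented namespace document standing in for StoRM's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace document; element and attribute names are
# illustrative, not StoRM's real namespace schema.
NAMESPACE = """\
<namespace>
  <storageArea vo="atlas" quality="disk" root="/gpfs/atlas"/>
  <storageArea vo="cms" quality="tape" root="/castor/cms"/>
</namespace>
"""

def resolve(logical_name, vo):
    """Map a logical file name to a physical path by evaluating the
    namespace document alone, without querying any database service."""
    root = ET.fromstring(NAMESPACE)
    area = root.find(f"storageArea[@vo='{vo}']")
    return area.get("root") + logical_name

print(resolve("/data/run1.root", "atlas"))  # /gpfs/atlas/data/run1.root
```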

  11. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application.

    Science.gov (United States)

    Peissig, Peggy; Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-09-13

    The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. ©Peggy Peissig, Kelsey M Schwei, Christopher Kadolph, Joseph Finamore, Efrain Cancel, Catherine A McCarty, Asha Okorie, Kate L Thomas, Jennifer Allen Pacheco, Jyotishman Pathak, Stephen B Ellis, Joshua C Denny, Luke V Rasmussen, Gerard Tromp, Marc S Williams, Tamara R Vrabec, Murray H Brilliant. 
Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.09.2017.

  12. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world, and the anthropometric studies stored in them often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a kind of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally, along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
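The "Rosetta stone" translation idea can be sketched minimally: each member database keeps its local format, and a small adapter emits the common XML. The local field names and common-schema tags below are hypothetical:

```python
import xml.etree.ElementTree as ET

# Two member databases exporting the same measurement under different
# local names and units; both names are invented for illustration.
LOCAL_RECORDS = [
    {"stature_mm": 1752, "source": "survey-A"},
    {"height_cm": 168.0, "source": "survey-B"},
]

def to_common_xml(record):
    """Translate a local record into a shared 'Rosetta' format (mm)."""
    subject = ET.Element("subject", source=record["source"])
    if "stature_mm" in record:
        mm = record["stature_mm"]
    else:
        mm = record["height_cm"] * 10  # normalize units to mm
    ET.SubElement(subject, "stature", unit="mm").text = str(mm)
    return ET.tostring(subject, encoding="unicode")

print(to_common_xml(LOCAL_RECORDS[1]))
```

Because only the adapter is schema-aware, each originator keeps its own data management system while the network sees one uniform format.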

  13. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    Science.gov (United States)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of an analysis and design task. This has previously been done by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of the data. In this investigation, the data of interest are described using the XML markup language, which allows the data to be stored as a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed by another application, is used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and with turbine-blade data produced by an independent blade design program (UD0300).
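The serialize-then-extract pattern the grant report describes can be sketched with the standard library; one program writes its results as an XML string, and a second reads back only the fields it needs (all names are invented for illustration):

```python
import xml.etree.ElementTree as ET

def to_xml_string(results):
    """Serialize one task's output dict into an XML text string."""
    root = ET.Element("analysis")
    for name, value in results.items():
        ET.SubElement(root, "item", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def extract(xml_string, wanted):
    """Read back only the named fields another application needs."""
    root = ET.fromstring(xml_string)
    return {i.get("name"): float(i.text)
            for i in root.findall("item") if i.get("name") in wanted}

msg = to_xml_string({"thrust": 1200.0, "mass": 35.5, "cost": 9.9})
print(extract(msg, {"thrust", "mass"}))
```

Neither program needs to know the other's internal file format; the XML string is the only shared contract.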

  14. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    Energy Technology Data Exchange (ETDEWEB)

    Arcidiacono, R. [Massachusetts Institute of Technology, Cambridge, MA (United States); Brigljevic, V. [CERN, Geneva (Switzerland); Rudjer Boskovic Institute, Zagreb (Croatia); Bruno, G. [CERN, Geneva (Switzerland); Cano, E. [CERN, Geneva (Switzerland); Cittolin, S. [CERN, Geneva (Switzerland); Erhan, S. [University of California, Los Angeles, Los Angeles, CA (United States); Gigi, D. [CERN, Geneva (Switzerland); Glege, F. [CERN, Geneva (Switzerland); Gomez-Reino, R. [CERN, Geneva (Switzerland); Gulmini, M. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); CERN, Geneva (Switzerland); Gutleber, J. [CERN, Geneva (Switzerland); Jacobs, C. [CERN, Geneva (Switzerland); Kreuzer, P. [University of Athens, Athens (Greece); Lo Presti, G. [CERN, Geneva (Switzerland); Magrans, I. [CERN, Geneva (Switzerland) and Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain)]. E-mail: ildefons.magrans@cern.ch; Marinelli, N. [Institute of Accelerating Systems and Applications, Athens (Greece); Maron, G. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); Meijers, F. [CERN, Geneva (Switzerland); Meschi, E. [CERN, Geneva (Switzerland); Murray, S. [CERN, Geneva (Switzerland); Nafria, M. [Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain); Oh, A. [CERN, Geneva (Switzerland); Orsini, L. [CERN, Geneva (Switzerland); Pieri, M. [University of California, San Diago, San Diago, CA (United States); Pollet, L. [CERN, Geneva (Switzerland); Racz, A. [CERN, Geneva (Switzerland); Rosinsky, P. [CERN, Geneva (Switzerland); Schwick, C. [CERN, Geneva (Switzerland); Sphicas, P. [University of Athens, Athens (Greece); CERN, Geneva (Switzerland); Varela, J. [LIP, Lisbon (Portugal); CERN, Geneva (Switzerland)

    2005-07-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  15. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    International Nuclear Information System (INIS)

    Arcidiacono, R.; Brigljevic, V.; Bruno, G.; Cano, E.; Cittolin, S.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Gulmini, M.; Gutleber, J.; Jacobs, C.; Kreuzer, P.; Lo Presti, G.; Magrans, I.; Marinelli, N.; Maron, G.; Meijers, F.; Meschi, E.; Murray, S.; Nafria, M.; Oh, A.; Orsini, L.; Pieri, M.; Pollet, L.; Racz, A.; Rosinsky, P.; Schwick, C.; Sphicas, P.; Varela, J.

    2005-01-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  16. E-Learning – Using XML technologies to meet the special characteristics of higher education

    Directory of Open Access Journals (Sweden)

    Igor Kanovsky

    2004-02-01

    Full Text Available In this paper we claim that the current approach to learning objects and metadata standards is counterproductive for the integration of e-learning in higher education. We explain why higher education is different with regard to e-learning, and we suggest an approach that avoids the use of global standards in favor of an evolving set of metadata tags for an evolving community of practice. We demonstrate how XML technologies, together with some minimal technical help for the participating teachers, can provide the required foundation for a productive process of integrating e-learning in higher education.

  17. XML Schema for Atoms, Molecules and Solids (XSAMS). Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Braams, B.J.

    2011-12-01

    A Consultants' Meeting on 'XML Schema for Atoms, Molecules and Solids (XSAMS)' was held at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD, United States of America, 3-5 October 2011. Objectives of the meeting were to review and discuss developments of the Schema made during 2011 in connection with implementations on databases associated with the Virtual Atomic and Molecular Data Centre (VAMDC) and to agree on the adoption of an international standard XSAMS version 1.0. The proceedings of the meeting are summarized here. (author)

  18. An XML Approach of Coding a Morphological Database for Arabic Language

    Directory of Open Access Journals (Sweden)

    Mourad Gridach

    2011-01-01

    Full Text Available We present an XML approach for the production of a morphological database for Arabic that will be used in morphological analysis of modern standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For Arabic, producing a morphological database is not an easy task, because the language has particularities such as agglutination and a high degree of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographical correction.

  19. ReDaX (Relational to XML data publishing): a lightweight framework for publishing relational data

    OpenAIRE

    Ormeño, Emilio G.; Berón, Fabián R.

    2003-01-01

    Perhaps one of the greatest drawbacks of XML is that it was not designed to store information; rather, it was designed to enable the publication and exchange of information through the XSL (eXtensible Stylesheet Language) specification. However, most of an enterprise's information resides in relational databases. Publishing information via XML is the process of transforming relational data into an XML document in order to ...

  20. Developing and Deploying an XML-based Learning Content Management System at the FernUniversität Hagen

    Directory of Open Access Journals (Sweden)

    Gerd Steinkamp

    2005-02-01

    Full Text Available This paper is a report about the FuXML project carried out at the FernUniversität Hagen. FuXML is a Learning Content Management System (LCMS) aimed at providing a practical and efficient solution for the issues attributed to authoring, maintenance, production and distribution of online and offline distance learning material. The paper presents the environment for which the system was conceived and describes the technical realisation. We discuss the reasons for specific implementation decisions and also address the integration of the system within the organisational and technical infrastructure of the university.

  1. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    Science.gov (United States)

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
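The intermediary step described above (measurement data at the workstation converted into a standardized XML file) can be illustrated with a minimal sketch; the element and attribute names below are placeholders, not the actual AIM schema vocabulary.

```python
import xml.etree.ElementTree as ET

def measurement_to_xml(name, value, unit):
    """Serialize one quantitative imaging measurement as an XML string.

    Element names ("ImageAnnotation", "Calculation", ...) are illustrative
    stand-ins, not the real AIM schema.
    """
    root = ET.Element("ImageAnnotation")
    calc = ET.SubElement(root, "Calculation", {"name": name})
    ET.SubElement(calc, "value").text = str(value)
    ET.SubElement(calc, "unit").text = unit
    return ET.tostring(root, encoding="unicode")

doc = measurement_to_xml("lesion_diameter", 14.2, "mm")
```

Once the measurement lives in a machine-readable file like this, the same document can feed both the reporting software and a research database, which is the point of the structured workflow.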

  2. An XML-based interchange format for genotype-phenotype data.

    Science.gov (United States)

    Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B

    2008-02-01

    Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation are available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). © 2007 Wiley-Liss, Inc.

  3. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs).

    Science.gov (United States)

    Wei, Tiandi; Gong, Jing; Jamitzky, Ferdinand; Heckl, Wolfgang M; Stark, Robert W; Rössle, Shaila C

    2008-11-05

    Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.

  4. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    Science.gov (United States)

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. © 2005 Elsevier Ltd. All rights reserved.

  5. THE POSSIBILITIES FOR THE CREATION OF A LANGUAGE XML FOR THE FORMALIZATION OF THE ACCOUNTING RECORDS

    Directory of Open Access Journals (Sweden)

    Aurora Popescu

    2008-12-01

    Full Text Available During the nineties, the main trend in application development was to give computers connected to the Internet access to a wide range of informational resources (databases, applications). Witness the numerous languages and technologies that permit easy development of database-processing applications with a simple web browser, for example the scripting languages ASP, PHP, JSP, etc. Many changes have taken place in recent years in the informational needs and the equipment used: today not only computers are connected to the Internet, but also a wide range of devices such as mobile phones and many home appliances. These needs made the conception of a universal language, understood by all these diverse devices, an imperative necessity. XML is the answer to this requirement, representing a new step in the development of the information age. XML appeared as a consequence of the limits of HTML (the language of web pages), which is incapable of making its data usable by other applications.

  6. MASCOT HTML and XML parser: an implementation of a novel object model for protein identification data.

    Science.gov (United States)

    Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L

    2006-11-01

    Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
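A parser of this kind maps XML report entries onto typed objects and eliminates redundancy in the input, as the abstract describes. The sketch below shows the idea in miniature, with made-up element names rather than MASCOT's real report format or the actual PDOM classes.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class ProteinHit:
    """One identified protein; a tiny stand-in for a PDOM object."""
    accession: str
    score: float

# Hypothetical search report with one duplicated entry.
SAMPLE = """<search>
  <hit accession="P12345" score="87.5"/>
  <hit accession="P12345" score="87.5"/>
  <hit accession="Q67890" score="42.0"/>
</search>"""

def parse_hits(xml_text):
    """Build objects from the XML report, dropping redundant entries."""
    seen, hits = set(), []
    for h in ET.fromstring(xml_text).iter("hit"):
        if h.get("accession") not in seen:
            seen.add(h.get("accession"))
            hits.append(ProteinHit(h.get("accession"), float(h.get("score"))))
    return hits

hits = parse_hits(SAMPLE)
```

With the data held as plain objects, writing each one out to a relational database (as the real parser's front end does) reduces to iterating over the list.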

  7. Design and implementation of an XML based object-oriented detector description database for CMS

    International Nuclear Information System (INIS)

    Liendl, M.

    2003-04-01

    This thesis deals with the development of a detector description database (DDD) for the compact muon solenoid (CMS) experiment at the large hadron collider (LHC) located at the European organization for nuclear research (CERN). DDD is a fundamental part of the CMS offline software with its main applications, simulation and reconstruction. Both are in need of different models of the detector in order to efficiently solve their specific tasks. In the thesis the requirements to a detector description database are analyzed and the chosen solution is described in detail. It comprises the following components: an XML based detector description language, a runtime system that implements an object-oriented transient representation of the detector, and an application programming interface to be used by client applications. One of the main aspects of the development is the design of the DDD components. The starting point is a domain model capturing concisely the characteristics of the problem domain. The domain model is transformed into several implementation models according to the guidelines of the model driven architecture (MDA). Implementation models and appropriate refinements thereof are foundation for adequate implementations. Using the MDA approach, a fully functional prototype was realized in C++ and XML. The prototype was successfully tested through seamless integration into both the simulation and the reconstruction framework of CMS. (author)

  8. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs)

    Directory of Open Access Journals (Sweden)

    Stark Robert W

    2008-11-01

    Full Text Available Abstract Background Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. Description We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. Conclusion LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.

  9. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  10. Distribution of immunodeficiency fact files with XML – from Web to WAP

    Directory of Open Access Journals (Sweden)

    Riikonen Pentti

    2005-06-01

    Full Text Available Abstract Background Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery on different platforms. Methods Here we present a new data model called the fact file and an XML-based specification, the Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. Results IDML and fact files were used to build a comprehensive Web and WAP accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface, which serves as a starting point to explore information on hereditary diseases. Conclusion IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.

  11. Domain Modeling and Application Development of an Archetype- and XML-based EHRS. Practical Experiences and Lessons Learnt.

    Science.gov (United States)

    Kropf, Stefan; Chalopin, Claire; Lindner, Dirk; Denecke, Kerstin

    2017-06-28

    Access to patient data within the hospital or between hospitals is still problematic, since a variety of information systems is in use, applying different vendor-specific terminologies and underlying knowledge models. In addition, the development of electronic health record systems (EHRSs) is time- and resource-consuming. Thus, there is a substantial need for a development strategy for standardized EHRSs. We are applying a reuse-oriented process model and demonstrate its feasibility and realization on a practical medical use case: an EHRS holding all relevant data arising in the context of treatment of tumors of the sella region. In this paper, we describe the development process and our practical experiences. Requirements towards the development of the EHRS were collected by interviews with a neurosurgeon and by patient data analysis. For modelling of patient data, we selected openEHR as the standard and exploited the software tools provided by the openEHR foundation. The patient information model forms the core of the development process, which comprises the EHR generation and the implementation of an EHRS architecture. Moreover, a reuse-oriented process model from the business domain was adapted to the development of the EHRS, providing a suitable abstraction of both the modeling and the development of an EHR-centered EHRS. The information modeling process resulted in 18 archetypes that were aggregated in a template and built the boilerplate of the model-driven development. The EHRs and the EHRS were developed using openEHR and W3C standards, tightly supported by well-established XML techniques. The GUI of the final EHRS integrates and visualizes information from various examinations, medical reports, findings and laboratory test results. We conclude that the development of a standardized overarching EHR and an EHRS is feasible using openEHR and W3C standards, enabling a high degree of semantic interoperability. The standardized

  12. 77 FR 28541 - Request for Comments on the Recommendation for the Disclosure of Sequence Listings Using XML...

    Science.gov (United States)

    2012-05-15

    ... the sequence part of the standard, and a second annex setting forth the Document Type Definition (DTD) for the standard. Five rounds of comment/revision have taken place since March 2011, and discussion of... patent data purposes. The XML standard also includes four qualifiers for amino acids. These feature keys...

  13. StreetTiVo: Using a P2P XML Database System to Manage Multimedia Data in Your Living Room

    NARCIS (Netherlands)

    Zhang, Ying; de Vries, A.P.; Boncz, P.; Hiemstra, Djoerd; Ordelman, Roeland J.F.; Li, Qing; Feng, Ling; Pei, Jian; Wang, Sean X.

    StreetTiVo is a project that aims at bringing research results into the living room; in particular, a mix of current results in the areas of Peer-to-Peer XML Database Management System (P2P XDBMS), advanced multimedia analysis techniques, and advanced information retrieval techniques. The project

  14. XML technologies for the Omaha System: a data model, a Java tool and several case studies supporting home healthcare.

    Science.gov (United States)

    Vittorini, Pierpaolo; Tarquinio, Antonietta; di Orio, Ferdinando

    2009-03-01

    The eXtensible markup language (XML) is a metalanguage which is useful to represent and exchange data between heterogeneous systems. XML may enable healthcare practitioners to document, monitor, evaluate, and archive medical information and services into distributed computer environments. Therefore, the most recent proposals on electronic health records (EHRs) are usually based on XML documents. Since none of the existing nomenclatures were specifically developed for use in automated clinical information systems, but were adapted to such use, numerous current EHRs are organized as a sequence of events, each represented through codes taken from international classification systems. In nursing, a hierarchically organized problem-solving approach is followed, which hardly couples with the sequential organization of such EHRs. Therefore, the paper presents an XML data model for the Omaha System taxonomy, which is one of the most important international nomenclatures used in the home healthcare nursing context. Such a data model represents the formal definition of EHRs specifically developed for nursing practice. Furthermore, the paper delineates a Java application prototype which is able to manage such documents, shows the possibility to transform such documents into readable web pages, and reports several case studies, one currently managed by the home care service of a Health Center in Central Italy.
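Transforming such an XML document into a readable web page, as the abstract mentions, is typically done with XSLT; Python's standard library has no XSLT engine, so the sketch below hand-codes an equivalent transform. The element names are hypothetical, loosely inspired by the Omaha System's problem/intervention structure rather than taken from its actual data model.

```python
import xml.etree.ElementTree as ET

# Hypothetical problem-oriented record, not the real Omaha System markup.
RECORD = """<problem code="01">
  <name>Income</name>
  <intervention category="teaching"/>
  <intervention category="surveillance"/>
</problem>"""

def to_html(xml_text):
    """Render a problem record as a simple HTML fragment (a stand-in
    for the XSLT transform a production system would use)."""
    p = ET.fromstring(xml_text)
    items = "".join(f"<li>{i.get('category')}</li>" for i in p.iter("intervention"))
    return f"<h2>{p.findtext('name')}</h2><ul>{items}</ul>"

html = to_html(RECORD)
```

The same source document can be rendered for the web, archived, or exchanged between systems, which is the central benefit of keeping the record in XML rather than in presentation markup.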

  15. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  16. XML Storage for Magnetotelluric Transfer Functions: Towards a Comprehensive Online Reference Database

    Science.gov (United States)

    Kelbert, A.; Blum, C.

    2015-12-01

    Magnetotelluric Transfer Functions (MT TFs) represent most of the information about Earth electrical conductivity found in the raw electromagnetic data, providing inputs for further inversion and interpretation. To be useful for scientific interpretation, they must also contain carefully recorded metadata. Making these data available in a discoverable and citable fashion would provide the most benefit to the scientific community, but such a development requires that the metadata is not only present in the file but is also searchable. The most commonly used MT TF format to date, the historical Society of Exploration Geophysicists Electromagnetic Data Interchange Standard 1987 (EDI), no longer supports some of the needs of modern magnetotellurics, most notably accurate error bars recording. Moreover, the inherent heterogeneity of EDI's and other historic MT TF formats has mostly kept the community away from healthy data sharing practices. Recently, the MT team at Oregon State University in collaboration with IRIS Data Management Center developed a new, XML-based format for MT transfer functions, and an online system for long-term storage, discovery and sharing of MT TF data worldwide (IRIS SPUD; www.iris.edu/spud/emtf). The system provides a query page where all of the MT transfer functions collected within the USArray MT experiment and other field campaigns can be searched for and downloaded; an automatic on-the-fly conversion to the historic EDI format is also included. To facilitate conversion to the new, more comprehensive and sustainable, XML format for MT TFs, and to streamline inclusion of historic data into the online database, we developed a set of open source format conversion tools, which can be used for rotation of MT TFs as well as a general XML EDI converter (https://seiscode.iris.washington.edu/projects/emtf-fcu). Here, we report on the newly established collaboration between the USGS Geomagnetism Program and the Oregon State University to gather and

  17. The tissue microarray data exchange specification: A document type definition to validate and enhance XML data

    Science.gov (United States)

    Nohle, David G; Ayers, Leona W

    2005-01-01

    Background The Association for Pathology Informatics (API) Extensible Mark-up Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA) accompanied by de-identified clinical data to approved researchers. Exporting our TMA data into the proposed API-specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API-specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institutions' exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better specified identifiers and hierarchical relationships will make automatic use of the data possible. Our tool can be

  18. An XML schema for automated data integration in a Multi-Source Information System dedicated to end-stage renal disease.

    Science.gov (United States)

    Dufour, Eric; Ben Saïd, Mohamed; Jais, Jean Philippe; Le Mignot, Loic; Richard, Jean-Baptiste; Landais, Paul

    2009-01-01

    Data exchange and interoperability between clinical information systems represent a crucial issue in the context of patient record data collection. An XML representation schema adapted to end-stage renal disease (ESRD) patients was developed and successfully tested against patient data in the dedicated Multi-Source Information System (MSIS) active file (more than 16,000 patient records). The ESRD-XML-Schema is organized into Schema subsets respecting the coherence of the clinical information and enriched with coherent data types. Tests are realized against XML-data files generated in conformity with the ESRD-XML Schema. Manual tests allowed the XML schema validation of the data format and content. Programmatic tests allowed the design of generic XML parsing routines, a portable object data model representation and the implementation of automatic data-exchange flows with the MSIS database system. The ESRD-XML-Schema represents a valid framework for data exchange and supports interoperability. Its modular design offers opportunity to simplify physicians' multiple tasks in order to privilege their clinical work.
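A generic parsing routine that checks records against required elements, in the spirit of the validation tests described above, might look like the sketch below; the element paths are hypothetical, not taken from the actual ESRD-XML Schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical required element paths; the real schema subsets differ.
REQUIRED = {"patient/id", "patient/dialysis-start"}

def missing_fields(xml_text, required=REQUIRED):
    """Return the set of required element paths absent from a record."""
    root = ET.fromstring(xml_text)
    return {path for path in required if root.find(path) is None}

complete = ("<record><patient><id>42</id>"
            "<dialysis-start>2008-01-01</dialysis-start></patient></record>")
partial = "<record><patient><id>42</id></patient></record>"
```

Routines like this are what make automated data-exchange flows safe: a record that fails the check is rejected before it reaches the destination database.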

  19. Gating-ML: XML-based gating descriptions in flow cytometry.

    Science.gov (United States)

    Spidlen, Josef; Leif, Robert C; Moore, Wayne; Roederer, Mario; Brinkman, Ryan R

    2008-12-01

    The lack of software interoperability with respect to gating, due to the absence of a standardized mechanism for data exchange, has traditionally been a bottleneck preventing reproducibility of flow cytometry (FCM) data analysis and the use of multiple analytical tools. To facilitate interoperability among FCM data analysis tools, members of the International Society for the Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) have developed an XML-based mechanism to formally describe gates (Gating-ML). Gating-ML, an open specification for encoding gating, data transformations and compensation, has been adopted by the ISAC DSTF as a Candidate Recommendation. Gating-ML can facilitate the exchange of gating descriptions the same way that FCS facilitated the exchange of raw FCM data. Its adoption will open new collaborative opportunities as well as possibilities for advanced analyses and methods development. The ISAC DSTF is satisfied that the standard addresses the requirements for a gating exchange standard.

  20. Evaluation of ISO EN 13606 as a result of its implementation in XML.

    Science.gov (United States)

    Austin, Tony; Sun, Shanghua; Hassan, Taher; Kalra, Dipak

    2013-12-01

    The five parts of the ISO EN 13606 standard define a means by which health-care records can be exchanged between computer systems. Starting within the European standardisation process, it has now become internationally ratified in ISO. However, ISO standards do not require that a reference implementation be provided, and in order for ISO EN 13606 to deliver the expected benefits, it must be provided not as a document, but as an operational system that is not vendor specific. This article describes the evolution of an Extensible Markup Language (XML) Schema through three iterations, each of which emphasised one particular approach to delivering an executable equivalent to the printed standard. Developing these operational versions and incorporating feedback from their users demonstrated where implementation compromises were needed and exposed defects in the standard. These are discussed herein. They may require a future technical revision to ISO EN 13606 to resolve the issues identified.

  1. An XML-Based Manipulation and Query Language for Rule-Based Information

    Science.gov (United States)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents a manipulation and query language developed to facilitate the maintenance of this object during its life-cycle and to query the information contained in it. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  2. XML representation and management of temporal information for web-based cultural heritage applications

    Directory of Open Access Journals (Sweden)

    Fabio Grandi

    2006-01-01

    Full Text Available In this paper we survey the recent activities and achievements of our research group in the deployment of XML-related technologies in Cultural Heritage applications concerning the encoding of temporal semantics in Web documents. In particular we review "The Valid Web", an XML/XSL infrastructure we defined and implemented for the definition and management of historical information within multimedia documents available on the Web, and its further extension to the effective encoding of advanced temporal features like indeterminacy, multiple granularities and calendars, enabling efficient processing in a user-friendly Web-based environment. Potential uses of the developed infrastructures include a broad range of applications in the cultural heritage domain, where the historical perspective is relevant, with potentially positive impacts on E-Education and E-Science.
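The core idea of attaching temporal semantics to Web documents can be sketched as validity intervals carried in attributes, filtered by a query date. The attribute names and content below are illustrative assumptions, not the actual "Valid Web" markup:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Hypothetical temporally annotated fragment: each fact carries a
# valid-time interval. Attribute names are invented for illustration.
DOC = """<history>
  <fact validFrom="1000-01-01" validTo="1400-12-31">Medieval walls</fact>
  <fact validFrom="1401-01-01" validTo="1797-12-31">Renaissance palace</fact>
</history>"""

def valid_at(root, when):
    """Return the facts whose validity interval contains `when`."""
    return [f.text for f in root.iter("fact")
            if date.fromisoformat(f.get("validFrom")) <= when
            <= date.fromisoformat(f.get("validTo"))]

root = ET.fromstring(DOC)
print(valid_at(root, date(1500, 6, 1)))
```

In the infrastructure described above this kind of filtering would be done in XSL rather than Python, but the data model is the same: intervals on elements, selection by a reference date (indeterminacy and multiple calendars would extend the interval comparison).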

  3. Automated Individual Prescription of Exercise with an XML-based Expert System.

    Science.gov (United States)

    Jang, S; Park, S R; Jang, Y; Park, J; Yoon, Y; Park, S

    2005-01-01

    Continuously motivating people to exercise regularly is more important than removing barriers such as lack of time, cost of equipment or gym membership, lack of nearby facilities, and poor weather or night-time lighting. Our proposed system presents practicable methods of motivation through a web-based exercise prescription service. Users are instructed to exercise according to their physical ability by means of an automated individual exercise prescription, checked and approved by a personal trainer or exercise specialist after the user has been tested with HIMS, a fitness assessment system. Furthermore, utilizing BIOFIT exercise prescriptions scheduled by an expert system can help users exercise systematically. Automated individual prescriptions are built as XML-based documents because the data needs a flexible, extensible and convertible structure to process diverse exercise templates. The web-based exercise prescription service keeps users interested in exercise even if they live in many different environments.

  4. Evaluation of ISO EN 13606 as a result of its implementation in XML

    Science.gov (United States)

    Sun, Shanghua; Hassan, Taher; Kalra, Dipak

    2013-01-01

    The five parts of the ISO EN 13606 standard define a means by which health-care records can be exchanged between computer systems. Starting within the European standardisation process, it has now become internationally ratified in ISO. However, ISO standards do not require that a reference implementation be provided, and in order for ISO EN 13606 to deliver the expected benefits, it must be provided not as a document, but as an operational system that is not vendor specific. This article describes the evolution of an Extensible Markup Language (XML) Schema through three iterations, each of which emphasised one particular approach to delivering an executable equivalent to the printed standard. Developing these operational versions and incorporating feedback from their users demonstrated where implementation compromises were needed and exposed defects in the standard. These are discussed herein. They may require a future technical revision to ISO EN 13606 to resolve the issues identified. PMID:23995217

  5. XML and Graphs for Modeling, Integration and Interoperability:a CMS Perspective

    CERN Document Server

    van Lingen, Frank

    2004-01-01

    This thesis reports on a designer's Ph.D. project called “XML and Graphs for Modeling, Integration and Interoperability: a CMS perspective”. The project has been performed at CERN, the European laboratory for particle physics, in collaboration with the Eindhoven University of Technology and the University of the West of England in Bristol. CMS (Compact Muon Solenoid) is a next-generation high energy physics experiment at CERN, which will start running in 2007. The complexity of the detector used in the experiment and the autonomous groups that are part of the CMS experiment result in disparate data sources (different in format, type and structure). Users need to access and exchange data located in multiple heterogeneous sources in a domain-specific manner and may want to access a simple unit of information without having to understand details of the underlying schema. Users want to access the same information from several different heterogeneous sources. It is neither desirable nor fea...

  6. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group

    2007-12-01

    QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org.
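Consuming an event description of the kind QuakeML standardizes reduces to namespace-aware XML parsing. The fragment below follows the general shape of a QuakeML event (a `publicID`, nested magnitude values) but is simplified for illustration and is not a conformant document:

```python
import xml.etree.ElementTree as ET

# Namespace URI follows the QuakeML convention; the exact document
# layout here is a simplified assumption, not the full schema.
NS = {"q": "http://quakeml.org/xmlns/quakeml/1.2"}
EVENT = """<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2">
  <q:event publicID="smi:example.org/event/1">
    <q:magnitude><q:mag><q:value>5.4</q:value></q:mag></q:magnitude>
  </q:event>
</q:quakeml>"""

root = ET.fromstring(EVENT)
event = root.find("q:event", NS)
mag = event.find("q:magnitude/q:mag/q:value", NS)
print(event.get("publicID"), float(mag.text))
```

The location-independent resource identifier (`smi:...`) mirrors the metadata concept mentioned in the abstract: consumers resolve identity from the string itself, not from where the file was fetched.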

  7. MeMo: a hybrid SQL/XML approach to metabolomic data management for functional genomics

    Directory of Open Access Journals (Sweden)

    Hardy Nigel

    2006-06-01

    Full Text Available Abstract Background The genome sequencing projects have shown our limited knowledge regarding gene function, e.g. S. cerevisiae has 5–6,000 genes of which nearly 1,000 have an uncertain function. Their gross influence on the behaviour of the cell can be observed using large-scale metabolomic studies. The metabolomic data produced need to be structured and annotated in a machine-usable form to facilitate the exploration of the hidden links between the genes and their functions. Description MeMo is a formal model for representing metabolomic data and the associated metadata. Two predominant platforms (SQL and XML) are used to encode the model. MeMo has been implemented as a relational database using a hybrid approach combining the advantages of the two technologies. It represents a practical solution for handling the sheer volume and complexity of the metabolomic data effectively and efficiently. The MeMo model and the associated software are available at http://dbkgroup.org/memo/. Conclusion The maturity of relational database technology is used to support efficient data processing. The scalability and self-descriptiveness of XML are used to simplify the relational schema and facilitate the extensibility of the model necessitated by the creation of new experimental techniques. Special consideration is given to data integration issues as part of the systems biology agenda. MeMo has been physically integrated and cross-linked to related metabolomic and genomic databases. Semantic integration with other relevant databases has been supported through ontological annotation. Compatibility with other data formats is supported by automatic conversion.
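The hybrid SQL/XML pattern described here, where stable fields live in relational columns while variable metadata lives in an XML column, can be sketched with stdlib tools. The table, column, and tag names below are invented for illustration; this is not the actual MeMo schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Relational part: fixed, indexable columns. XML part: a TEXT column
# holding experiment-specific metadata whose structure may evolve
# without a schema migration. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, "
             "strain TEXT, metadata_xml TEXT)")
conn.execute("INSERT INTO sample VALUES (1, 'BY4741', "
             "'<meta><instrument>GC-MS</instrument></meta>')")

# SQL selects the row efficiently; XML parsing unpacks the flexible part.
strain, xml_blob = conn.execute(
    "SELECT strain, metadata_xml FROM sample WHERE id = 1").fetchone()
instrument = ET.fromstring(xml_blob).findtext("instrument")
print(strain, instrument)
```

The design trade-off is as the abstract states: adding a new experimental technique means emitting different XML, not altering the relational schema, at the cost of the XML column being opaque to SQL predicates.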

  8. HepML, an XML-based format for describing simulated data in high energy physics

    Science.gov (United States)

    Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.

    2010-10-01

    In this paper we describe a HepML format and a corresponding C++ library developed for keeping complete description of parton level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community. Program summary Program title: libhepml Catalogue identifier: AEGL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 138 866 No. of bytes in distributed program, including test data, etc.: 613 122 Distribution format: tar.gz Programming language: C++, C Computer: PCs and workstations Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10 RAM: 1 073 741 824 bytes (1 Gb) Classification: 6.2, 11.1, 11.2 External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/) Nature of problem: Monte Carlo simulation in high

  9. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    Science.gov (United States)

    2003-03-01


  10. The Managerial Grid; Key Orientations for Achieving Production through People.

    Science.gov (United States)

    Blake, Robert R; Mouton, Jane S.

    The Managerial Grid arranges a concern for production on the horizontal axis and a concern for people on the vertical axis of a coordinate system: 1,1 shows minimum concern for production and people; 9,1 shows major production emphasis and minimum human considerations; 1,9 shows maximum concern for friendly working conditions and minimum…

  11. XSAMS: XML schema for atomic and molecular data and particle solid interaction. Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.

    2009-02-01

    Advanced developments in computer technologies offer exciting opportunities for new distribution tools and applications in various fields of physics. The convenient and reliable exchange of data is clearly an important component of such applications. Therefore, in 2003, the A and M Data Unit initiated within the collaborative efforts of the DCN (Data Centres Network) a new standard for atomic, molecular and particle surface interaction data exchange (AM/PSI) based on XML (eXtensible Markup Language). The schema is named XSAMS which stands for 'XML Schema for Atoms Molecules and Solids'. A working group composed of staff from the IAEA, NIST, ORNL and Observatoire Paris-Meudon meets biannually to discuss progress made on XSAMS, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. Such a meeting was held on 27 October 2008, and the discussions and progress made in the schema are considered within this report. (author)

  12. ART-ML - a novel XML format for the biological procedures modeling and the representation of blood flow simulation.

    Science.gov (United States)

    Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I

    2010-01-01

    The paper proposes a novel Extensible Markup Language (XML)-based format called ART-ML that aims at supporting the interoperability and reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.

  13. Providing access to risk prediction tools via the HL7 XML-formatted risk web service.

    Science.gov (United States)

    Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil

    2013-07-01

    Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future

  14. An XML-based Schema-less Approach to Managing Diagnostic Data in Heterogeneous Formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O. [Japan Atomic Energy Agency, Ibaraki (Japan)

    2009-07-01

    Managing diagnostic data in heterogeneous formats is always a nuisance, especially when a new diagnostic technique requires a new data structure that does not fit in the existing data format. Ideally, it is best to have an all-purpose schema that can specify any data structure. But devising such a schema is a difficult task, and the resultant data management system tends to be large and complicated. As a complementary approach, we can think of a system that has no specific schema but requires each of the data to describe itself without assuming any prior information. In this paper, a very primitive implementation of such a system based on Extensible Markup Language (XML) is examined. The actual implementation is no more than the addition of a tiny XML meta-data file that describes the detailed format of the associated diagnostic data file. There are many ways to write and read such meta-data files. For example, if the data are in a standard format that is foreign to the existing system, just specify the name of the format and what interface to use for reading the data. If the data are in a non-standard arbitrary format, write what is written and how into the meta-data file at every occurrence of data output. And as a last resort, if the format of the data is too complicated, a code to read the data can be stored in the meta-data file. Of course, this schema-less approach has some drawbacks, two of which are the doubling of the number of files to be managed and the low performance of data handling, though the former can be a merit when it is necessary to update the meta-data leaving the body data intact. The important point is that the necessary information to read the data is decoupled from the data itself. The merits and demerits of this approach are discussed. This document is composed of an abstract followed by the presentation slides. (author)
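The "write what is written and how" variant of the meta-data file can be sketched directly: a tiny XML description drives a generic reader over an otherwise opaque binary record. The meta-data vocabulary and field names below are invented for illustration:

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical meta-data file: each <field> names a value and gives its
# binary layout as a struct format code (i = 4-byte int, d = 8-byte double).
META = """<metadata>
  <field name="shot" format="i"/>
  <field name="temperature" format="d"/>
</metadata>"""

# A binary "diagnostic" record matching that description.
blob = struct.pack("<id", 48158, 2.35e3)

def read_record(meta_xml, data):
    """Generic reader: no prior knowledge of the data, only the meta-data."""
    fields = ET.fromstring(meta_xml).findall("field")
    fmt = "<" + "".join(f.get("format") for f in fields)
    values = struct.unpack(fmt, data)
    return {f.get("name"): v for f, v in zip(fields, values)}

print(read_record(META, blob))
```

This illustrates the decoupling the author emphasizes: the meta-data file can be revised (e.g. renaming a field) while the body data stays byte-for-byte intact.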

  15. Tactical Web Services: Using XML and Java Web Services to Conduct Real-Time Net-Centric Sonar Visualization

    Science.gov (United States)

    2004-09-01


  16. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    Science.gov (United States)

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data of different formats and structures imposes a great challenge. The XML technology presents a promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.

  17. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML Database – Xindice

    Directory of Open Access Journals (Sweden)

    Li Jianling

    2006-01-01

    Full Text Available Abstract Background Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data of different formats and structures imposes a great challenge. The XML technology presents a promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. Results The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Conclusion Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.

  18. SpecSatisfiabilityTool: A tool for testing the satisfiability of specifications on XML documents

    Directory of Open Access Journals (Sweden)

    Javier Albors

    2015-01-01

    Full Text Available We present a prototype that implements a set of logical rules to prove satisfiability for a class of specifications on XML documents. Specifications are given by means of constraints built on Boolean XPath patterns. The main goal of this tool is to test whether a given specification is satisfiable or not, and to justify the decision by showing the execution history. It can also be used to test whether a given document is a model of a given specification and, as a by-product, it permits searching for all the relations (monomorphisms) between two patterns and combining patterns in different ways. The results of these operations are shown visually, and therefore the tool makes these operations more understandable. The algorithm has been implemented in Prolog, but the prototype has a Java interface for easy and friendly use. In this paper we show how to use this interface in order to test all the desired properties.
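The "is this document a model of the specification?" check is the easy half of the problem and can be sketched with the small XPath subset in the Python standard library (satisfiability proper, deciding whether *any* document satisfies the constraints, is what the Prolog rules are for). The document and patterns below are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# A candidate document; content is invented for illustration.
DOC = """<library>
  <book><title>XML in a Nutshell</title><isbn>0596002920</isbn></book>
</library>"""

# A toy specification: (pattern, expected) pairs, where each Boolean
# XPath-like pattern must match iff `expected` is True.
SPEC = [
    (".//book/title", True),    # books must expose a title
    (".//book/isbn", True),     # books must carry an ISBN
    (".//book/price", False),   # prices are forbidden in this feed
]

root = ET.fromstring(DOC)
is_model = all((root.find(p) is not None) == want for p, want in SPEC)
print(is_model)
```

Note that ElementTree supports only a limited XPath subset; the prototype described above works on full Boolean XPath patterns and additionally searches for monomorphisms between patterns, which this sketch does not attempt.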

  19. A conceptual basis to encode and detect organic functional groups in XML.

    Science.gov (United States)

    Sankar, Punnaivanam; Krief, Alain; Vijayasarathi, Durairaj

    2013-06-01

    A conceptual basis to define and detect organic functional groups is developed. The basic model of a functional group is termed a primary functional group and is characterized by a group center composed of one or more group center atoms bonded to terminal atoms and skeletal carbon atoms. The generic group center patterns are identified from the structures of known functional groups. Accordingly, a chemical ontology, 'Font', is developed to organize the existing functional groups as well as new ones to be defined by chemists. The basic model is extended to accommodate various combinations of primary functional groups as functional group assemblies. A concept of skeletal group is proposed to define the characteristic groups composed of only carbon atoms, to be regarded as equivalent to functional groups. The combination of primary functional groups with skeletal groups is categorized as a skeletal group assembly. In order to make the model suitable for reaction modeling, a Graphical User Interface (GUI) is developed to define the functional groups and to encode them in an XML format appropriate for detecting them in chemical structures. The system is capable of detecting multiple instances of primary functional groups as well as overlapping poly-functional groups as the respective assemblies. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Encoding of Fundamental Chemical Entities of Organic Reactivity Interest using chemical ontology and XML.

    Science.gov (United States)

    Durairaj, Vijayasarathi; Punnaivanam, Sankar

    2015-09-01

    Fundamental chemical entities are identified in the context of organic reactivity and classified as appropriate concept classes namely ElectronEntity, AtomEntity, AtomGroupEntity, FunctionalGroupEntity and MolecularEntity. The entity classes and their subclasses are organized into a chemical ontology named "ChemEnt" for the purpose of assertion, restriction and modification of properties through entity relations. Individual instances of entity classes are defined and encoded as a library of chemical entities in XML. The instances of entity classes are distinguished with a unique notation and identification values in order to map them with the ontology definitions. A model GUI named Entity Table is created to view graphical representations of all the entity instances. The detection of chemical entities in chemical structures is achieved through suitable algorithms. The possibility of asserting properties to the entities at different levels and the mechanism of property flow within the hierarchical entity levels is outlined. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Integrating personalized medical test contents with XML and XSL-FO.

    Science.gov (United States)

    Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas

    2011-03-01

    In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementing a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Using an open source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.

  2. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  3. Development of XML Schema for Broadband Digital Seismograms and Data Center Portal

    Science.gov (United States)

    Takeuchi, N.; Tsuboi, S.; Ishihara, Y.; Nagao, H.; Yamagishi, Y.; Watanabe, T.; Yanaka, H.; Yamaji, H.

    2008-12-01

    There are a number of data centers around the globe where digital broadband seismograms are open to researchers. Those centers use their own user interfaces, and there is no standard for accessing and retrieving seismograms from different data centers through a unified interface. One of the emergent technologies for realizing a unified user interface across different data centers is the concept of Web services and Web service portals. Here we have developed a prototype data center portal for digital broadband seismograms. This Web service portal uses WSDL (Web Services Description Language) to accommodate differences among the data centers. By using WSDL, alteration and addition of data center user interfaces can be easily managed. This portal, called the NINJA Portal, assumes three Web services: (1) a database query service, (2) a seismic event data request service, and (3) a seismic continuous data request service. The current system supports both the station search of the database query service and the seismic continuous data request service. The data centers supported by the NINJA Portal will initially be the OHP data center at ERI and the Pacific21 data center at IFREE/JAMSTEC. We have developed a metadata standard for seismological data based on QuakeML for parametric data, which has been developed by ETH Zurich, and XML-SEED for waveform data, which was developed by IFREE/JAMSTEC. The prototype of the NINJA Portal is now released through the IFREE web page (http://www.jamstec.go.jp/pacific21/).

  4. New Path Based Index Structure for Processing CAS Queries over XML Database

    Directory of Open Access Journals (Sweden)

    Krishna Asawa

    2017-01-01

    Full Text Available Querying nested data has become one of the most challenging issues for retrieving desired information from the Web. Today diverse applications generate a tremendous amount of data in different formats. The data and information exchanged on the Web are commonly expressed in nested representations such as XML, JSON, etc. Unlike data in a traditional database system, they do not have a rigid schema. In general, nested data is managed by storing the data and its structure separately, which significantly reduces the performance of data retrieval. Ensuring the efficiency of processing queries that locate the exact positions of elements has become a big challenge. Different indexing structures have been proposed in the literature to improve the performance of query processing on nested structures, but most past research concentrates on the structure alone. This paper proposes a new index structure that combines the siblings of terminal nodes into one path, which efficiently processes twig queries with fewer lookups and joins. The proposed approach is compared with some of the existing approaches, and the results show that queries are processed with better performance than with the existing ones.
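
    The core idea, grouping the terminal (leaf) siblings of a node under a single label path so a twig query needs one lookup instead of a structural join, can be sketched as follows. The document and element names are illustrative, and a real implementation would persist the index rather than build it in memory.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

DOC = """<lib>
  <book><title>XML Indexing</title><year>2017</year></book>
  <book><title>Twig Joins</title><year>2015</year></book>
</lib>"""

def build_path_index(xml_text):
    """Group the leaf children of each element under one root-to-node label path."""
    root = ET.fromstring(xml_text)
    index = defaultdict(list)
    def walk(elem, path):
        here = f"{path}/{elem.tag}"
        # all childless (terminal) siblings of this element become one record
        leaves = {c.tag: c.text for c in elem if len(c) == 0}
        if leaves:
            index[here].append(leaves)
        for child in elem:
            if len(child):
                walk(child, here)
    walk(root, "")
    return index

idx = build_path_index(DOC)
# Twig query "books published in 2017": one lookup on "/lib/book", no join
matches = [r["title"] for r in idx["/lib/book"] if r.get("year") == "2017"]
```

    Because `title` and `year` live in the same index record, the content predicate and the output node are resolved together, which is where the saving in lookups and joins comes from.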

  5. XML-based clinical data standardisation in the National Health Service Scotland

    Directory of Open Access Journals (Sweden)

    Raluca Bunduchi

    2006-12-01

    Full Text Available The objective of this paper is to clarify the role that socio-economic factors played in shaping the development of XML-based clinical data standards in the National Health Service in Scotland from 2000 to 2004. The paper discusses the NHS Scotland approach to clinical data standardisation, emphasising the actors involved, their choices during the standard development process and the factors that have shaped these choices. The case suggests that the NHS Scotland approach to clinical data standardisation is shaped by strong political pressures for fast development of an integrated electronic patient care system, economic pressures for high efficiency and cost reductions, and organisational requirements for strong clinical support. Such economic, political and organisational pressures explain the informal approach to standard development, the emphasis on fast system development and strong clinical involvement. At the same time, market factors explain the low commitment of the IT vendors, which might have otherwise put significant pressure on NHS Scotland to pursue a more formalised standardisation approach within an internationally recognised standard-setting body.

  6. QuakeML: Status of the XML-based Seismological Data Exchange Format

    Science.gov (United States)

    Euchner, Fabian; Schorlemmer, Danijel; Kästli, Philipp; Quakeml Working Group

    2010-05-01

    QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. The current release (version 1.2) is based on a public Request for Comments process that included contributions from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, Nanometrics, and ISTI. QuakeML has mainly been funded through the EC FP6 infrastructure project NERIES, in which it was endorsed as the preferred data exchange format. Currently, QuakeML services are being installed at several institutions around the globe, including EMSC, ORFEUS, ETH, Geoazur (Europe), NEIC, ANSS, SCEC/SCSN (USA), and GNS Science (New Zealand). Some of these institutions already provide QuakeML earthquake catalog web services. Several implementations of the QuakeML data model have been made. QuakePy, an open-source Python-based seismicity analysis toolkit using the QuakeML data model, is being developed at ETH. QuakePy is part of the software stack used in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by SCEC. Furthermore, the QuakeML data model is part of the SeisComP3 package from GFZ Potsdam. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, seismic inventory, and resource metadata) has been started, but is at an early stage. Contributions from the community that help to widen the thematic coverage of QuakeML are highly welcome. Online resources: http://www.quakeml.org, http://www.quakepy.org

  7. RSS (http://www.iaees.org/publications/journals/arthropods/rss.xml)

    Directory of Open Access Journals (Sweden)

    Arthropods (ISSN 2224-4255)

    Full Text Available Arthropods ISSN 2224-4255 URL: http://www.iaees.org/publications/journals/arthropods/online-version.asp RSS: http://www.iaees.org/publications/journals/arthropods/rss.xml E-mail: arthropods@iaees.org Editor-in-Chief: WenJun Zhang Aims and Scope ARTHROPODS (ISSN 2224-4255) is an international journal devoted to the publication of articles on various aspects of arthropods, e.g., ecology, biogeography, systematics, biodiversity (species diversity, genetic diversity, et al.), conservation, control, etc. The journal provides a forum for examining the importance of arthropods in the biosphere (both terrestrial and marine ecosystems) and human life in such fields as agriculture, forestry, fishery, environmental management and human health. The scope of Arthropods is wide and embraces all arthropods: insects, arachnids, crustaceans, centipedes, millipedes, and other arthropods. Articles/short communications on new taxa (species, genera, families, orders, etc.) and new records of arthropods are particularly welcome. Authors can submit their works to the email box of this journal, arthropods@iaees.org. All manuscripts submitted to this journal must be previously unpublished and may not be considered for publication elsewhere at any time during the review period of this journal. Authors are asked to read the Author Guidelines before submitting manuscripts. In addition to free submissions from authors around the world, special issues are also accepted. The organizer of a special issue can collect submissions (yielded from a research project, a research group, etc.) on a specific research topic, or submissions of a scientific conference, for publication as a special issue.

  8. QuakeML: status of the XML-based seismological data exchange format

    Directory of Open Access Journals (Sweden)

    Joachim Saul

    2011-04-01

    Full Text Available QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as by the need to set a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments provided by a broad international user community. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, ground motion, seismic inventory, and resource metadata) has been started, but is at an early stage. Several applications based on the QuakeML data model have been created so far. Among these are earthquake catalog web services at the European Mediterranean Seismological Centre (EMSC), GNS Science, and the Southern California Earthquake Data Center (SCEDC), and QuakePy, an open-source Python-based seismicity analysis toolkit. Furthermore, QuakeML is being used in the SeisComP3 system from GFZ Potsdam, and in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by the Southern California Earthquake Center (SCEC). QuakeML is still under active and dynamic development. Further contributions from the community are crucial to its success and are highly welcome.

  9. RSS (http://www.iaees.org/publications/journals/environsc/rss.xml)

    Directory of Open Access Journals (Sweden)

    Environmental Skeptics and Critics (ISSN 2224-4263)

    Full Text Available Environmental Skeptics and Critics ISSN 2224-4263 URL: http://www.iaees.org/publications/journals/environsc/online-version.asp RSS: http://www.iaees.org/publications/journals/environsc/rss.xml E-mail: environsc@iaees.org Editor-in-Chief: WenJun Zhang Aims and Scope The more truth is debated, the clearer it becomes. Science will not proceed without debate and controversy. Wide and in-depth debate and controversy over humanity's knowledge, attitudes, policies and practices concerning the environment determines the future of our planet. There are numerous controversial and potentially controversial issues in environmental sciences and practices. ENVIRONMENTAL SKEPTICS AND CRITICS (ISSN 2224-4263) is an international journal devoted to the publication of skeptical and critical articles/short communications/letters on theories, viewpoints, methodologies, practices, policies, etc., in ecological and environmental areas. The journal provides a forum for questioning, disputing, arguing, challenging, criticizing and judging known theories, methodologies, practices, and policies, etc., or presenting different ideas. The scope of Environmental Skeptics and Critics is wide and embraces all controversial, non-conclusive or unexplained issues in ecological and environmental areas. Authors can submit their works to the email box of this journal, environsc@iaees.org. All manuscripts submitted to this journal must be previously unpublished and may not be considered for publication elsewhere at any time during the review period of this journal. Authors are asked to read the Author Guidelines before submitting manuscripts. In addition to free submissions from authors around the world, special issues are also accepted. The organizer of a special issue can collect submissions (yielded from a research project, a research group, etc.) on a specific research topic, or submissions of a scientific conference, for publication as a special issue.

  10. SU-E-T-48: Automated Quality Assurance for XML Controlled Linacs

    International Nuclear Information System (INIS)

    Valdes, G; Morin, O; Pouliot, J; Chuang, C

    2014-01-01

    Purpose: To automate routine imaging QA procedures so that complying with TG 142 and TG 179 can be efficient and reliable. Methods: Two QA tests for a TrueBeam linac were automated: a Winston-Lutz test as described by Lutz et al. [1], using the Winston-Lutz test kit from BrainLab, Germany, and a CBCT image quality test as described in TG 179, using the EMMA phantom (Siemens Medical Physics, Germany). For each QA procedure tested, a three-step paradigm was used. First, the data were automatically acquired using TrueBeam Developer Mode and XML scripting. Second, the data acquired in the first step were automatically processed using in-house Matlab GUIs. Third, machine learning algorithms were used to automatically classify the processed data and generate reports. Results: The Winston-Lutz test could be performed by an experienced medical physicist in 29.0 ± 8.0 min. The same test, if automated using our paradigm, could be performed in 3.0 ± 0.1 min. Similarly, substantial time could be saved on image quality tests; in this case, the amount of time saved will depend on the phantoms used and the initial localization method. Additionally, machine learning algorithms could automatically identify the root causes of problems, if any, and possibly help reduce machine downtime. Conclusion: Modern linear accelerators are equipped with advanced 2D and 3D imaging that is used for patient alignment, substantially improving IGRT protocols. However, this extra complexity exponentially increases the number of QA tests needed. Using the new paradigm described above, not only bare-minimum but best-practice QA programs could be implemented with the same manpower. This work is supported by Varian, Palo Alto, CA.
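
    At its core, the processing step of a Winston-Lutz analysis measures, on each EPID image, the displacement between the detected ball-bearing centre and the radiation field centre. The pixel coordinates and pixel pitch below are made-up illustrative values, not measured data, and the actual image analysis (edge detection, centroid finding) is omitted.

```python
import math

PIXEL_MM = 0.39  # assumed EPID pixel pitch at the isocentre scale (illustrative)

# (gantry, couch, ball-bearing centre, field centre), all in pixels
images = [
    (0,   0, (512.0, 510.5), (512.6, 511.0)),
    (90,  0, (511.2, 512.4), (511.0, 513.1)),
    (180, 0, (512.9, 511.8), (512.1, 511.5)),
]

def displacements_mm(shots):
    """2D BB-to-field-centre displacement per image, converted to millimetres."""
    out = []
    for gantry, couch, bb, fld in shots:
        d = math.hypot(fld[0] - bb[0], fld[1] - bb[1]) * PIXEL_MM
        out.append((gantry, couch, d))
    return out

results = displacements_mm(images)
worst = max(d for _, _, d in results)  # compared against a tolerance, e.g. 1 mm
```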

  11. Upgrading a TCABR data analysis and acquisition system for remote participation using Java, XML, RCP and modern client/server communication/authentication

    International Nuclear Information System (INIS)

    Sa, W.P. de

    2010-01-01

    The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure calls carried as XML (the XML-RPC technology). The integration of the Java language, XML and XML-RPC technologies allows a standard data and communication access layer between users and laboratories to be developed easily, using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application allows simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks'.
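
    The client/server pattern described here, remote procedure calls marshalled as XML over HTTP, can be demonstrated end-to-end with Python's standard library, which implements the XML-RPC protocol directly. The method name `get_signal` and its payload are illustrative assumptions, not part of the TCABR API.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_signal(shot: int, channel: str):
    """Pretend data-access layer: return a few samples for a shot/channel."""
    return {"shot": shot, "channel": channel, "samples": [0.0, 0.5, 1.0]}

# Port 0 asks the OS for any free port; logRequests=False keeps stdout quiet.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_signal)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()  # serve one call

client = ServerProxy(f"http://127.0.0.1:{port}")
reply = client.get_signal(1234, "Mirnov-01")  # arguments and result travel as XML
server.server_close()
```

    Every laboratory exposing the same method signatures is what lets a common client library retrieve data identically from all sites, as the abstract describes.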

  12. Upgrading a TCABR data analysis and acquisition system for remote participation using Java, XML, RCP and modern client/server communication/authentication

    Energy Technology Data Exchange (ETDEWEB)

    Sa, W.P. de, E-mail: pires@if.usp.b [Instituto de Fisica, Universidade de Sao Paulo, Rua do Matao, Travessa R, 187 CEP 05508-090 Cidade Universitaria, Sao Paulo (Brazil)

    2010-07-15

    The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure calls carried as XML (the XML-RPC technology). The integration of the Java language, XML and XML-RPC technologies allows a standard data and communication access layer between users and laboratories to be developed easily, using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application allows simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks'.

  13. Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam

    International Nuclear Information System (INIS)

    Asiev, Krum; Mullins, Joel; DeBlois, François; Liang, Liheng; Syme, Alasdair

    2014-01-01

    Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch at various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variations of the isocentre size of less than 0.1 mm along the X, Y and Z axes, which compares favorably to a manual acquisition, for which we typically observe variations of up to 0.5 mm.
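
    Generating such a script amounts to emitting one XML block per couch/gantry combination with an image-acquisition step at each stop. The element and attribute names below are purely illustrative stand-ins; the real TrueBeam Developer Mode schema is defined by Varian's documentation and is not reproduced here.

```python
import xml.etree.ElementTree as ET

# 8 couch/gantry pairs, mirroring the 8-image WL acquisition described above
COMBOS = [(g, c) for g in (0, 90, 180, 270) for c in (0, 90)]

def build_wl_script(combos):
    """Emit one <Beam> per couch/gantry stop, each requesting an EPID image."""
    root = ET.Element("BeamSet", name="WinstonLutz")
    for i, (gantry, couch) in enumerate(combos, start=1):
        beam = ET.SubElement(root, "Beam", id=str(i))
        ET.SubElement(beam, "GantryRtn").text = str(gantry)
        ET.SubElement(beam, "CouchRtn").text = str(couch)
        ET.SubElement(beam, "AcquireImage").text = "MV"  # EPID image at this stop
    return root

script = build_wl_script(COMBOS)
xml_text = ET.tostring(script, encoding="unicode")
```

    Scripting the sequence is what removes the per-position manual setup that dominates the 25-minute manual acquisition time.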

  14. A general XML schema and SPM toolbox for storage of neuro-imaging results and anatomical labels.

    Science.gov (United States)

    Keator, David Bryant; Gadde, Syam; Grethe, Jeffrey S; Taylor, Derek V; Potkin, Steven G

    2006-01-01

    With the increased frequency of multisite, large-scale collaborative neuro-imaging studies, the need for a general, self-documenting framework for the storage and retrieval of activation maps and anatomical labels becomes evident. To address this need, we have developed an eXtensible Markup Language (XML) schema and associated tools for the storage of neuro-imaging activation maps and anatomical labels. This schema, as part of the XML-based Clinical Experiment Data Exchange (XCEDE) schema, provides storage capabilities for analysis annotations, activation threshold parameters, and cluster- and voxel-level statistics. Activation parameters contain information describing the threshold, degrees of freedom, FWHM smoothness, search volumes, voxel sizes, expected voxels per cluster, and expected number of clusters in the statistical map. Cluster and voxel statistics can be stored along with the coordinates, threshold, and anatomical label information. Multiple threshold types can be documented for a given cluster or voxel, along with the uncorrected and corrected probability values. Multiple atlases can be used to generate anatomical labels and stored for each significant voxel or cluster. Additionally, a toolbox for the Statistical Parametric Mapping software (http://www.fil.ion.ucl.ac.uk/spm/) was created to capture the results from activation maps using the XML schema, supporting both the SPM99 and SPM2 versions (http://nbirn.net/Resources/Users/Applications/xcede/SPM_XMLTools.htm). Support for anatomical labeling is available via the Talairach Daemon (http://ric.uthscsa.edu/projects/talairachdaemon.html) and Automated Anatomical Labeling (http://www.cyceron.fr/freeware/).

  15. XML Schema for Atoms, Molecules and Solids (XSAMS). Summary Report of an IAEA Consultants’ Meeting. Proceedings of a conference

    International Nuclear Information System (INIS)

    Braams, B.J.

    2013-12-01

    A Consultants’ Meeting on “XML Schema for Atoms, Molecules and Solids (XSAMS)” was held in conjunction with the Virtual Atomic and Molecular Data Centre (VAMDC) Cycle Three Project Meeting on the Campus of the University of Vienna on 20-22 February 2012. The meeting was to agree on the adoption of an international standard XSAMS version 1.0 and to discuss implementation activities and user experience with the schema. The proceedings of the meeting are summarized here. (author)

  16. Towards an Ontology for the Global Geodynamics Project: Automated Extraction of Resource Descriptions from an XML-Based Data Model

    Science.gov (United States)

    Lumb, L. I.; Aldridge, K. D.

    2005-12-01

    Using the Earth Science Markup Language (ESML), an XML-based data model for the Global Geodynamics Project (GGP) was recently introduced [Lumb & Aldridge, Proc. HPCS 2005, Kotsireas & Stacey, eds., IEEE, 2005, 216-222]. This data model possesses several key attributes: it makes use of XML schema; supports semi-structured ASCII format files; includes Earth Science affinities; and is on track for compliance with emerging Grid computing standards (e.g., the Global Grid Forum's Data Format Description Language, DFDL). Favorable attributes notwithstanding, metadata (i.e., data about data) was identified [Lumb & Aldridge, 2005] as a key challenge for progress in enabling the GGP for Grid computing. Even in projects of small-to-medium scale like the GGP, the manual introduction of metadata has the potential to be the rate-determining metric for progress. Fortunately, an automated approach for metadata introduction has recently emerged. Based on Gleaning Resource Descriptions from Dialects of Languages (GRDDL, http://www.w3.org/2004/01/rdxh/spec), this bottom-up approach allows for the extraction of Resource Description Framework (RDF) representations from the XML-based data model (i.e., the ESML representation of GGP data) subject to rules of transformation articulated via eXtensible Stylesheet Language Transformations (XSLT). In addition to introducing relationships into the GGP data model, and thereby addressing the metadata requirement, the syntax and semantics of RDF comprise a requisite for a GGP ontology, i.e., ``the common words and concepts (the meaning) used to describe and represent an area of knowledge'' [Daconta et al., The Semantic Web, Wiley, 2003]. After briefly reviewing the XML-based model for the GGP, attention focuses on the automated extraction of an RDF representation via GRDDL with XSLT-delineated templates. This bottom-up approach, in tandem with a top-down approach based on the Protege integrated development environment for ontologies (http
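
    The essential GRDDL move, transforming an XML dialect into RDF triples, can be shown without an XSLT engine by hard-coding a single transformation rule. The element names below are illustrative rather than drawn from the actual ESML/GGP model, and a faithful GRDDL implementation would express the rule as an XSLT stylesheet referenced from the source document.

```python
import xml.etree.ElementTree as ET

# A made-up ESML-like fragment; attribute and element names are illustrative.
ESML = """<dataset id="ggp-station-01">
  <instrument>superconducting gravimeter</instrument>
  <location>Cantley</location>
</dataset>"""

def extract_triples(xml_text):
    """One transformation rule: the root id is the subject, each child element
    becomes a (subject, predicate, object) triple with its tag as predicate."""
    root = ET.fromstring(xml_text)
    subject = root.get("id")
    return [(subject, child.tag, child.text) for child in root]

triples = extract_triples(ESML)
# e.g. ("ggp-station-01", "instrument", "superconducting gravimeter")
```

    Once in triple form, the relationships are exactly what an RDF store or ontology editor can consume, which is the metadata requirement the abstract identifies.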

  17. RSS (http://www.iaees.org/publications/journals/ces/rss.xml)

    Directory of Open Access Journals (Sweden)

    Computational Ecology and Software (ISSN 2220-721X)

    Full Text Available Computational Ecology and Software ISSN 2220-721X URL: http://www.iaees.org/publications/journals/ces/online-version.asp RSS: http://www.iaees.org/publications/journals/ces/rss.xml E-mail: ces@iaees.org Editor-in-Chief: WenJun Zhang Aims and Scope COMPUTATIONAL ECOLOGY AND SOFTWARE (ISSN 2220-721X) is an open access, peer-reviewed online journal that considers scientific articles in all different areas of computational ecology. It is the transactions of the International Society of Computational Ecology. The journal is concerned with ecological research, and with the construction and application of theories and methods of the computational sciences, including computational mathematics, computational statistics and computer science. It features the simulation, approximation, prediction, recognition, and classification of ecological issues. Intensive computation is one of the major emphases of the journal. The journal welcomes research articles, short communications, review articles, perspectives, and book reviews. The journal also supports the activities of the International Society of Computational Ecology. The topics to be covered by CES include, but are not limited to: •Computation-intensive methods, numerical and optimization methods, differential and difference equation modeling and simulation, prediction, recognition, classification, statistical computation (Bayesian computing, randomization, bootstrapping, Monte Carlo techniques, stochastic processes, etc.), agent-based modeling, individual-based modeling, artificial neural networks, knowledge-based systems, machine learning, genetic algorithms, data exploration, network analysis and computation, databases, ecological modeling and computation using Geographical Information Systems, satellite imagery, and other computation-intensive theories and methods. •Artificial ecosystems, artificial life, complexity of ecosystems and virtual reality. •The development, evaluation and validation of software and

  18. Multi-arrhythmias detection with an XML rule-based system from 12-Lead Electrocardiogram.

    Science.gov (United States)

    Khelassi, Abdeldjalil; Yelles-Chaouche, Sarra-Nassira; Benais, Faiza

    2017-05-01

    The computer-aided detection of cardiac arrhythmias is still a crucial application of medical technologies. Rule-based systems (RBS) ensure a high level of transparency and interpretability of the obtained results, facilitating the diagnosis of cardiologists and reducing the uncertainty of that diagnosis. In this research article, we have realized the classification and automatic recognition of cardiac arrhythmias using XML rules that represent the cardiologist's knowledge. Thirteen experiments with different knowledge bases were carried out to improve the performance of the method in the detection of 13 cardiac arrhythmias. In the first 12 experiments, we designed a specialized knowledge base for each cardiac arrhythmia, containing just one arrhythmia detection rule. In the last experiment, we applied a knowledge base containing the rules of 12 arrhythmias. We used, for the experiments, an international data set with 279 features and 452 records characterizing the 12 leads of the ECG signal and social information of patients. The data sets were constructed and published at Bilkent University, Ankara, Turkey. In addition, the second version of the self-developed software "XMLRULE" was used; the software can infer more than one class and facilitates the interpretability of the obtained results. The first 12 experiments gave 82.80% correct detection as the mean of all experiments; the results were between 19% and 100%, with a low rate in just one experiment. In the last experiment, in which all arrhythmias are considered, the rate of correct detection was 38.33%, with 90.55% sensitivity and 46.24% specificity. These results clearly show that a good choice of classification model is very beneficial in terms of performance. The obtained results were better than the published results with other computational methods for mono-class detection, but worse for multi-class detection.
    The RBS is the most transparent method for
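
    An XML detection rule of the kind described can be sketched as a set of conditions that must all hold for the rule to fire. The feature names, operators, and thresholds below are hypothetical illustrations, not rules from the study's knowledge base.

```python
import xml.etree.ElementTree as ET

# Hypothetical rule: the arrhythmia label, features, and thresholds are made up.
RULE = """<rule arrhythmia="sinus tachycardia">
  <condition feature="heart_rate" op="gt" value="100"/>
  <condition feature="p_wave_present" op="eq" value="1"/>
</rule>"""

OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b, "eq": lambda a, b: a == b}

def fires(rule_xml, features):
    """Return the arrhythmia name if every <condition> holds, else False."""
    rule = ET.fromstring(rule_xml)
    for cond in rule.findall("condition"):
        op = OPS[cond.get("op")]
        if not op(features[cond.get("feature")], float(cond.get("value"))):
            return False
    return rule.get("arrhythmia")

diagnosis = fires(RULE, {"heart_rate": 118.0, "p_wave_present": 1.0})
```

    Because each rule is a readable list of clinical conditions, a cardiologist can inspect exactly why a class was inferred, which is the transparency advantage claimed for RBS.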

  19. CWRML: representing crop wild relative conservation and use data in XML.

    Science.gov (United States)

    Moore, Jonathan D; Kell, Shelagh P; Iriondo, Jose M; Ford-Lloyd, Brian V; Maxted, Nigel

    2008-02-25

    Crop wild relatives are wild species that are closely related to crops. They are valuable as potential gene donors for crop improvement and may help to ensure food security for the future. However, they are becoming increasingly threatened in the wild and are inadequately conserved, both in situ and ex situ. Information about the conservation status and utilisation potential of crop wild relatives is diverse and dispersed, and no single agreed standard exists for representing such information; yet, this information is vital to ensure these species are effectively conserved and utilised. The European Community-funded project, European Crop Wild Relative Diversity Assessment and Conservation Forum, determined the minimum information requirements for the conservation and utilisation of crop wild relatives and created the Crop Wild Relative Information System, incorporating an eXtensible Markup Language (XML) schema to aid data sharing and exchange. Crop Wild Relative Markup Language (CWRML) was developed to represent the data necessary for crop wild relative conservation and ensure that they can be effectively utilised for crop improvement. The schema partitions data into taxon-, site-, and population-specific elements, to allow for integration with other more general conservation biology schemata which may emerge as accepted standards in the future. These elements are composed of sub-elements, which are structured in order to facilitate the use of the schema in a variety of crop wild relative conservation and use contexts. Pre-existing standards for data representation in conservation biology were reviewed and incorporated into the schema as restrictions on element data contents, where appropriate. CWRML provides a flexible data communication format for representing in situ and ex situ conservation status of individual taxa as well as their utilisation potential. 
The development of the schema highlights a number of instances where additional standards-development may

  20. RSS (http://www.iaees.org/publications/journals/nb/rss.xml)

    Directory of Open Access Journals (Sweden)

    Network Biology (ISSN 2220-8879)

    Full Text Available Network Biology ISSN 2220-8879 URL: http://www.iaees.org/publications/journals/nb/online-version.asp RSS: http://www.iaees.org/publications/journals/nb/rss.xml E-mail: networkbiology@iaees.org Editor-in-Chief: WenJun Zhang Aims and Scope NETWORK BIOLOGY (ISSN 2220-8879; CODEN NBEICS) is an open access, peer-reviewed international journal that considers scientific articles in all different areas of network biology. It is the transactions of the International Society of Network Biology. It is dedicated to the latest advances in network biology. The goal of this journal is to keep a record of the state-of-the-art research and promote the research work in these fast moving areas. The topics to be covered by Network Biology include, but are not limited to: •Theories, algorithms and programs of network analysis •Innovations and applications of biological networks •Ecological networks, food webs and natural equilibrium •Co-evolution, co-extinction, biodiversity conservation •Metabolic networks, protein-protein interaction networks, biochemical reaction networks, gene networks, transcriptional regulatory networks, cell cycle networks, phylogenetic networks, network motifs •Physiological networks •Network regulation of metabolic processes, human diseases and ecological systems •Social networks, epidemiological networks •System complexity, self-organized systems, emergence of biological systems, agent-based modeling, individual-based modeling, neural network modeling, and other network-based modeling, etc. We are also interested in short communications that clearly address a specific issue or completely present a new ecological network, food web, or metabolic or gene network, etc. Authors can submit their works to the email box of this journal, networkbiology@iaees.org. All manuscripts submitted to this journal must be previously unpublished and may not be considered for publication elsewhere at any time during the review period of this journal.

  1. XML Based Business-to-Business E-Commerce Frameworks

    Institute of Scientific and Technical Information of China (English)

    范国闯; 刘庆文; 李京; 钟华

    2002-01-01

    A B2B (Business-to-Business) e-commerce framework solves a key problem: interoperability between enterprises during e-commerce transactions. This paper first presents several key factors of a B2B e-commerce framework by analyzing the role such frameworks play. It then analyzes and compares several internationally popular B2B frameworks from the point of view of these factors. Finally, it proposes the design principles, objectives and e-commerce transaction language of the cnXML (Chinese e-Commerce XML) framework.

  2. LUNARINFO:A Data Archiving and Retrieving System for the Circumlunar Explorer Based on XML/Web Services

    Institute of Scientific and Technical Information of China (English)

    ZUO Wei; LI Chunlai; OUYANG Ziyuan; LIU Jianjun; XU Tao

    2004-01-01

    It is essential to build a modern information management system to store and manage data from our circumlunar explorer in order to realize its scientific objectives. It is difficult for an information system based on traditional distributed technology to communicate information and work together among heterogeneous systems in order to meet the new requirements of Internet development. XML and Web Services, because of their open standards and self-contained properties, have changed the mode of information organization and data management. They can now provide a good solution for building an open, extendable, and compatible information management system, and they facilitate the interchange and transfer of data among heterogeneous systems. On the basis of a three-tiered browser/server architecture with an Oracle 9i database as the information storage platform, we have designed and implemented a data archiving and retrieval system for the circumlunar explorer, LUNARINFO. We have also successfully realized integration between LUNARINFO and the cosmic dust database system. LUNARINFO consists of five function modules for data management, information publishing, system management, data retrieval, and interface integration. Based on XML and Web Services, it is not only an information database system for archiving, long-term storage, retrieval and publication of lunar reference data related to the circumlunar explorer, but it also provides data Web Services which can be easily developed by various expert groups and connected to the common information system to realize data resource integration.

  3. XSAMS: XML schema for atomic and molecular data and particle solid interactions. Summary report of an IAEA consultants' meeting

    International Nuclear Information System (INIS)

    Humbert, D.; Braams, B.J.

    2010-01-01

    Developments in computer technology offer exciting new opportunities for the reliable and convenient exchange of data. Therefore, in 2003 the Atomic and Molecular Data Unit initiated within the collaborative efforts of the A+M Data Centres Network a new standard for exchange of atomic, molecular and particle-solid interaction (AM/PSI) data based on the eXtensible Markup Language (XML). The standard is named XSAMS, which stands for XML Schema for Atoms, Molecules, and Solids. A working group composed of staff from the IAEA, NIST, ORNL, Observatoire Paris-Meudon and other institutions meets approximately biannually to discuss progress made on XSAMS, and to foresee new developments and actions to be taken to promote this standard for AM/PSI data exchange. Such a meeting was held 10-11 September 2009 at IAEA Headquarters, Vienna, and the discussions and results of the meeting are presented here. The principal concern of the meeting was the preparation of the first public release, version 0.1, of XSAMS. (author)

  4. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections is complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  5. Document Object Model and Its Application on XML Document Processing

    Directory of Open Access Journals (Sweden)

    Sinn-cheng Lin

    2001-06-01

    The Document Object Model (DOM) is an application programming interface that can be applied to process XML documents. It defines the logical structure, the accessing interfaces and the operation methods for the document. In the DOM, an original document is mapped to a tree structure; therefore, a computer program can easily traverse the tree and manipulate the nodes in the tree. In this paper, the fundamental models, definitions and specifications of DOM are surveyed. We then create an experimental DOM system called XML On-Line Parser. The front end of the system is a Web-based user interface for XML document input and parsed-result output, while the back end of the system is an ASP program, which transforms the original document into a DOM tree for document manipulation. This on-line system can be used with a general-purpose web browser to check the well-formedness and the validity of XML documents.
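The DOM idea described in this record, mapping a document to a tree that a program can traverse and manipulate, can be sketched with Python's standard library `xml.dom.minidom`; the document content and element names below are illustrative, not taken from the paper's XML On-Line Parser.

```python
from xml.dom import minidom

# Parse an XML document into a DOM tree.
doc = minidom.parseString(
    "<library><book id='b1'><title>XML Basics</title></book></library>"
)

# Traverse the tree: collect the text of every <title> node.
root = doc.documentElement
titles = [n.firstChild.data for n in root.getElementsByTagName("title")]

# Manipulate the tree: create and append a new node, then count books.
new_book = doc.createElement("book")
new_book.setAttribute("id", "b2")
root.appendChild(new_book)
book_count = len(root.getElementsByTagName("book"))

print(titles, book_count)  # ['XML Basics'] 2
```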

  6. Upgrading a TCABR Data Analysis and Acquisition System for Remote Participation Using Java, XML, RCP and Modern Client/Server Communication/Authentication

    Energy Technology Data Exchange (ETDEWEB)

    De Sa, W. [University of Sao Paulo - Institute of Physics - Plasma Physics Laboratory, Sao Paulo (Brazil)

    2009-07-01

    Each plasma physics laboratory has a proprietary control and data acquisition scheme, usually different from one laboratory to another: each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and it is difficult to follow the work remotely with a private system. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are very complex, with a large variability of components, requirement and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using the eXtensible Markup Language (XML). Communication between clients and servers uses Remote Procedure Call based on XML (XML-RPC). The integration of the Java language, XML and XML-RPC makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides simple Graphical User Interface (GUI) access.
The TCABR tokamak team, collaborating with the CFN (Nuclear Fusion Center, Technical University of Lisbon), is implementing these remote participation technologies, which are going to be tested at the Joint Experiment on TCABR (TCABR-JE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on
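The XML-RPC client/server pattern this record relies on can be sketched in a few lines with Python's standard library (an illustrative stand-in for the Java implementation; the method name and payload are hypothetical, not the TCABR API).

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Laboratory side: expose a data-access method over XML-RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]

def get_shot_data(shot_number):
    # Hypothetical payload standing in for real acquisition data.
    return {"shot": shot_number, "signal": [0.0, 0.5, 1.0]}

server.register_function(get_shot_data)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# Remote client side: the same method call works regardless of the
# server's operating system or computer architecture.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.get_shot_data(42)
print(result["shot"], len(result["signal"]))  # 42 3

server.shutdown()
```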

  7. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries riparian to all European seas. A unique portal makes possible distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling an efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, interested area), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship), acquisition information (e.g. utilized instruments, number of samples taken) In the context of the second phase of SeaDataNet (SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years) a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with the international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of a ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now in its stable version and it's being implemented and adopted by the SeaDataNet community tools and software. The effort has then continued to produce an ISO based metadata model and its XML encoding also for CSR. The metadata elements included in the CSR profile belong to different models: - ISO 19115: E.g. cruise identification information, including

  8. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
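Schema-driven, sample-centric XML reports of the kind GCDML targets can be sketched with Python's `xml.etree.ElementTree`; the element names below are invented for illustration and are NOT the actual GCDML vocabulary.

```python
import xml.etree.ElementTree as ET

# Build a small sample-centric report: where the sample came from and
# how it was processed (hypothetical element names).
report = ET.Element("sampleReport")
origin = ET.SubElement(report, "origin")
ET.SubElement(origin, "latitude").text = "54.09"
ET.SubElement(origin, "longitude").text = "7.90"
ET.SubElement(report, "sequencingMethod").text = "pyrosequencing"

# Serialize for data entry, exchange, or storage.
xml_text = ET.tostring(report, encoding="unicode")
print(xml_text)
```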

  9. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for formulation of field theoretical models. The goal of creation of such a standard is to provide a way for an unambiguous exchange and cross-checking of results of computer calculations in high energy physics. At the moment, the suggested standard implies that models under consideration are of the SM or MSSM type (i.e., they are just SM or MSSM, their submodels, smooth modifications or straightforward generalizations). (author)

  10. AN APPROACH TO THE XML REPRESENTATION OF DICOM OBJECTS FOR DIGITAL MEDICAL PHOTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Carlos Ruiz

    2007-12-01

    The DICOM standard (Digital Imaging and Communication in Medicine) is a non-proprietary protocol for the exchange of medical information. DICOM represents and defines the information of real-world objects such as a magnetic resonance image (MRI), a computerized axial tomography (CT) and a digital medical photograph (VL Photographic) through information object definitions called IODs. This article describes a methodology for representing the IOD of a visible-light digital medical photograph (VL Photographic Image) through XML Schema documents. These documents are used in the creation and validation of XML documents that represent the clinical and technical information associated with digital medical photographs, for later implementation in a web-based teledermatology application.

  11. Integrating Top-down and Bottom-up Cybersecurity Guidance using XML.

    Science.gov (United States)

    Lubell, Joshua

    2016-08-01

    This paper describes a markup-based approach for synthesizing disparate information sources and discusses a software implementation of the approach. The implementation makes it easier for people to use two complementary, but differently structured, guidance specifications together: the (top-down) Cybersecurity Framework and the (bottom-up) National Institute of Standards and Technology Special Publication 800-53 security control catalog. An example scenario demonstrates how the software implementation can help a security professional select the appropriate safeguards for restricting unauthorized access to an Industrial Control System. The implementation and example show the benefits of this approach and suggest its potential application to disciplines other than cybersecurity.

  12. Integrating Top-down and Bottom-up Cybersecurity Guidance using XML

    Science.gov (United States)

    Lubell, Joshua

    2016-01-01

    This paper describes a markup-based approach for synthesizing disparate information sources and discusses a software implementation of the approach. The implementation makes it easier for people to use two complementary, but differently structured, guidance specifications together: the (top-down) Cybersecurity Framework and the (bottom-up) National Institute of Standards and Technology Special Publication 800-53 security control catalog. An example scenario demonstrates how the software implementation can help a security professional select the appropriate safeguards for restricting unauthorized access to an Industrial Control System. The implementation and example show the benefits of this approach and suggest its potential application to disciplines other than cybersecurity. PMID:27795810

  13. XTCE (XML Telemetric and Command Exchange) Standard Making It Work at NASA. Can It Work For You?

    Science.gov (United States)

    Munoz-Fernandez, Michela; Smith, Danford S.; Rice, James K.; Jones, Ronald A.

    2017-01-01

    The XML Telemetric and Command Exchange (XTCE) standard is intended as a way to describe telemetry and command databases to be exchanged across centers and space agencies. XTCE usage has the potential to lead to consolidation of the Mission Operations Center (MOC) Monitor and Control displays for mission cross-support, reducing equipment and configuration costs, as well as a decrease in the turnaround time for telemetry and command modifications during all the mission phases. The adoption of XTCE will reduce software maintenance costs by reducing the variation between our existing mission dictionaries. The main objective of this poster is to show how powerful XTCE is in terms of interoperability across centers and missions. We will provide results for a use case where two centers can use their local tools to process and display the same mission telemetry in their MOC independently of one another. In our use case we first quantified the ability of XTCE to capture the telemetry definitions of the mission by use of our suite of support tools (Conversion, Validation, and Compliance measurement). The next step was to show processing and monitoring of the same telemetry in two mission centers. Once the database was converted to XTCE using our tool, the XTCE file became our primary database and was shared among the various tool chains through their XTCE importers and ultimately configured to ingest the telemetry stream and display or capture the telemetered information in similar ways. Summary results include the ability to take a real mission database and real mission telemetry and display them on various tools from two centers, as well as using commercially free COTS.

  14. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    International Nuclear Information System (INIS)

    Nguyen, N; Knutson, N; Schmidt, M; Price, M

    2016-01-01

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm² open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using our proposed as well as the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach, with equivalent results.

  15. PEMBUATAN PROTOTIPE APLIKASI WEB SERVICES BERBASIS XML MENGGUNAKAN TEKNOLOGI J2EE DENGAN STUDI KASUS RESERVASI HOTEL

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2005-01-01

    In the era of globalization, businesses are making intensive efforts to enter the global market. Companies increasingly need business transactions that are flexible and can be conducted with anyone, anytime and anywhere. A company's information system must therefore be able to communicate with the systems of its business partners without requiring too many prior agreements, which means a simple, standard infrastructure for exchanging business data is needed. This need is met by web service technology, which provides a simple infrastructure for businesses to communicate through the exchange of XML messages. In this research, a prototype web service application was developed for a hotel reservation case study mediated by a broker. This case study was chosen because it represents a distributed system, in which the broker acts as the link between customers and several distributed systems. J2EE technology was chosen for building the application because existing J2EE frameworks support web services and, in addition, J2EE is neutral with respect to various platforms (not

  16. Creation of an XML translator for the emotion markup language EmotionML

    Directory of Open Access Journals (Sweden)

    Marcelo Nichele

    2013-11-01

    This article presents and describes the creation of an XML translator for the EmotionML 1.0 language. EmotionML is a markup language created to standardize the representation of emotions in computers. The translator must be able to: (i) identify the EmotionML elements in an XML document and return them as instantiated objects; (ii) dynamically generate classes, from the grammar defined for the EmotionML language, for instantiating objects; (iii) generate EmotionML files from instantiated EmotionML objects. In this way, the proposed translator can be used in a wide variety of Affective Computing applications involving inference, expression or synthesis of emotions. Using the translator allows an affective computing system to retrieve information kept in files, modify the file data in real time, and store the information again in EmotionML format for future access or modification.
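The element-identification step of such a translator (finding EmotionML elements in an XML document and returning them as objects) can be sketched in Python; the namespace is EmotionML 1.0's, while the `Emotion` class is an illustrative stand-in for the generated classes, not the translator's actual API.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"

@dataclass
class Emotion:
    # Stand-in object model: just the category names of one <emotion>.
    categories: list

SAMPLE = f"""
<emotionml xmlns="{EMOTIONML_NS}">
  <emotion><category name="joy"/></emotion>
  <emotion><category name="fear"/></emotion>
</emotionml>
"""

def parse_emotions(xml_text):
    # Identify <emotion> elements and return them as instantiated objects.
    root = ET.fromstring(xml_text)
    emotions = []
    for emo in root.findall(f"{{{EMOTIONML_NS}}}emotion"):
        names = [c.get("name") for c in emo.findall(f"{{{EMOTIONML_NS}}}category")]
        emotions.append(Emotion(categories=names))
    return emotions

emotions = parse_emotions(SAMPLE)
print([e.categories for e in emotions])  # [['joy'], ['fear']]
```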

  17. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    Science.gov (United States)

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
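The design idea in this record, format-specific parsers behind one common spectrum-access interface so mzIdentML-consuming code need not care about the underlying file format, can be sketched as follows. The class and method names are illustrative, not the actual jmzReader API (which is Java), and file parsing is replaced by in-memory stand-ins.

```python
from abc import ABC, abstractmethod

class SpectrumSource(ABC):
    """Common interface every format-specific parser implements."""
    @abstractmethod
    def get_spectrum_by_id(self, spectrum_id):
        """Return the peak list referenced by an mzIdentML-style id."""

class MgfReader(SpectrumSource):
    def __init__(self, spectra):
        self._spectra = spectra  # stand-in for parsing an MGF file

    def get_spectrum_by_id(self, spectrum_id):
        return self._spectra[spectrum_id]

class MzXmlReader(SpectrumSource):
    def __init__(self, spectra):
        self._spectra = spectra  # stand-in for parsing an mzXML file

    def get_spectrum_by_id(self, spectrum_id):
        return self._spectra[spectrum_id]

def first_peak(source: SpectrumSource, spectrum_id):
    # Caller code depends only on the common interface, not the format.
    return source.get_spectrum_by_id(spectrum_id)[0]

mgf = MgfReader({"index=0": [(100.2, 5.0), (101.1, 3.0)]})
mzxml = MzXmlReader({"scan=1": [(250.7, 9.0)]})
print(first_peak(mgf, "index=0"), first_peak(mzxml, "scan=1"))
```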

  18. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... to bulls to see if next-generation cows produce human milk proteins (Davidman 1996); ... by indigenous peoples) to use as raw materials in developing new drugs. ...... provided both villages with dietary nutrients, particularly protein and fats.

  19. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Elements of an integrated approach to land management ..... judicious use of the potential offered by each territorial unit to respond sustainably to ..... Problems related to food security and poverty in rural areas ...

  20. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... of diversity considerably hinders sustainability and food security. ...... Through this law, the Colombian government extended its territorial sea and established the ..... This group recognized the advantages of a regional approach to ...

  1. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Source: Data from the Food and Agriculture Organization of the United Nations, as cited in Huston (1994). ...... However, owing to the high levels of chemical inputs, local contamination of the environment is much more ..... Selangor, Malaysia.

  2. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Sub-Saharan Africa is the only region in the world where production .... special issue of Hunger Notes: "Urban food production: neglected resource for food and jobs" ...... and vectors, and assess crop resistance to contamination.

  3. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The government considered education a priority of the strategy ...... Founded in 1952, the Servicio Nacional de Salud (SNS) provided most of these ...... y Política de la Reforma de la Seguridad Social en Salud, Santiago (Chile) ..... Santiago (Chile), Centro de Estudios Públicos, Documento de Trabajo no. 25.

  4. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The Servicio Nacional de Salud (SNS, national health service), established ...... Proyecto Ministerio de Salud sobre Evaluación de la Factibilidad Económica y Política de la Reforma de la Seguridad Social en Salud. ... Documento de Trabajo.

  5. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    and the. International Development Research Centre PO Box 8500. Ottawa ...... et al (1995) find comparable results in Nigeria, Mexico, Uganda and Brazil. ...... or airborne diseases, or typhoid, cholera, malaria, tuberculosis, and chicken pox.

  6. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Declaration of Principles of the World Council of Indigenous Peoples ... and traditional technologies based on the knowledge, innovations, and practices ...... the company later changed its name to avoid paying taxes and could not be traced. ...... the Protection and Conservation of Indigenous Knowledge, Sabah, Malaysia, ...

  7. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Another request in this period provided the catalyst for the current practice of ..... by the Canadians were the staples of the subject: taxation, borrowing, equalization, ..... Basically, even if it had in principle been possible to accomplish, the task ...... Malaysia, Poland, New Jersey, Texas, and a county in the United Kingdom.

  8. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    added tax levied on imported tropical timber alone (Counsell et al. ..... 60% of logging concessions in the Malaysian state of Sarawak were owned or run by ...... The first FSC principle is of special relevance to illegal logging and trade practices.

  9. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The Crucible Group was served by a Management Committee; it was ..... In this topic, we focus on the supply end of the access relationship. ...... In existing trade secrets law, it can apply to business plans, client lists, formulas, and so on.

  10. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The book links together the writing contributions of different members of the CAMPlab .... In their view, the traditional scientific method is not the best means for ...... in the care and maintenance of crops, especially when men are engaged in the ...

  11. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    This includes initiatives for down-scaling modern technology to suit different levels of scale ..... Their effective introduction, use, and maintenance require sophisticated ... Many of the methods for providing basic infrastructure are already on the ...

  12. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... (IDRC) as part of its review of corporate programming options for 2005-10. .... The government's divide-and-rule strategy included persuading CONTAG to ...... it could also lead to exploring market/public/community policy mixes that could ...

  13. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    International Food Policy Research Institute (IFPRI), USA ...... Consequently, the best strategy for indigenous populations and their ...... to recruit Indigenous people into their armed forces against their will, ...

  14. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The multidimensional strategy for reducing tobacco use focused on ..... The role of political authorities: Canada has shown the crucial importance of ...... adults, and that the recruitment of these millions of potential smokers constitutes ...

  15. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Science and technology policy applicable to ICTs and development ..... The challenge for an ICT strategy is to ensure that ...... Decision-makers in these countries will need to recruit people ...

  16. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    to recommend to national governments policies on ...... to act on policies requiring the recruitment of women professionals in ...... The new report, entitled Saving the Planet: Strategy for the Future of the ...

  17. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    What is the link between certain programs and the national political context in ... the needs of the Democratic Movement and to devise an effective strategy and ...... The same official spoke of Mr. Johnson's talent for recruiting ...

  18. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Building Learning and Reflection into Development Programs ... Will Outcome Mapping Provide the Appropriate Monitoring System? .... Although individuals being treated by social service providers in the United States face different constraints and .... In the literature, there are few good examples where this has been done.

  19. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    They need better access to information and the ability to disseminate it to citizens ... and other similar service providers can establish customer-service centres in different ... ICTs offer people with disabilities and learning difficulties access to ...

  20. Federal Register in XML

    Data.gov (United States)

    National Archives and Records Administration — The Federal Register is the official daily publication for rules, proposed rules, and notices of Federal agencies and organizations, as well as executive orders and...

  1. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Although the economic importance of genetic resources is now widely .... community about the issue of how access systems will impact on research and development. ...... For example, the National Cancer Institute's Development Therapeutics ...

  2. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Preparation for statistical analysis: Measures of dispersion, normal distribution and ... The HSRmodules are used in the Community Health and Social Science ... Qualitative research methods have also been given more weight and they were ...

  3. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    simply runs off fields or through industrial plants and back into the waterways. ...... Management decisions on water quantities, production, and the supply system ... Obviously, the institutional design of the Israeli water system reflects the strong ...... and manufacturers are left free to determine how best to achieve that level.

  4. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The audiovisual sector was excluded from the North American Free Trade ..... Scotia Bank – AUCC Awards for Excellence in Internationalization. ...... University terms of employment usually allow each faculty member to engage in such ...... vertical or horizontal axis, questions are raised, particularly in developing countries, ...

  5. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Any reproduction, storage in a retrieval system, or transmission in ... Cameroon: Blind Ambition and the Domino Effect ...... environmental (particularly concerning air and water pollution), regulations ...

  6. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Excluding these two networks destroyed by the war, there were 18 networks ...... Experts called this way of looking at things "the paradigm of ...... notably on the occasion of World AIDS Day, 1 December. ...... the availability schedule of the target groups was also ignored, which ...

  7. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Establish networks of female professionals in science and engineering; enhance ..... In many communities in developing countries — rural and urban — women and men .... SWAG organized four regional “women and environment” assemblies in ..... In a case study in Mbusyani, Kenya, although residents knew a great deal ...

  8. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    More specifically, to what extent do developing countries participate in and ... local, national, regional, and international — as well as facilitating access to the ...... Investment is concentrated in the central part of organizations or in major cities. .... Case studies of problem-solving related to critical development issues may both ...

  9. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    (2000) go further, asserting that ICTs have, to a large extent, been ...... In the communities hosting the projects, strategies for controlling ... school performance, to modernize the management of businesses in the ... sector

  10. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    The Task Force on Integrated Land Management agreed that the management of ... these countries are not in a position to foster industrial development. .... international and local, which determine economic performance. ...... pricing policies (control of the prices of consumer goods or inputs, etc.) ...

  11. pdf2xml

    International Development Research Centre (IDRC) Digital Library (Canada)

    Their growth has forced some of them (Rio de Janeiro, Maceió, Recife ...... gneiss, various types of migmatite, and isolated granitic intrusions. ...... of micaceous minerals (mainly muscovite and biotite, but also sericite, ...

  12. XPath Node Selection over Grammar-Compressed Trees

    Directory of Open Access Journals (Sweden)

    Sebastian Maneth

    2013-11-01

    Full Text Available XML document markup is highly repetitive and therefore well compressible using grammar-based compression. Downward, navigational XPath can be executed over grammar-compressed trees in PTIME: the query is translated into an automaton which is executed in one pass over the grammar. This result is well known and has been mentioned before. Here we present precise bounds on the time complexity of this problem, in terms of big-O notation. For a given grammar and XPath query, we consider three different tasks: (1) to count the number of nodes selected by the query, (2) to materialize the pre-order numbers of the selected nodes, and (3) to serialize the subtrees at the selected nodes.
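
    The counting task (1) can be illustrated with a toy sketch. Below, a tiny hypothetical grammar derives a tree with DAG-style sharing of subtrees (a simple special case of grammar compression); memoizing one count per nonterminal answers a label query in time proportional to the grammar size, without ever materializing the full tree. All rule and label names are invented for the example.

```python
# Toy grammar: each rule maps a nonterminal to (label, children), where
# children are nonterminal names. Reusing a nonterminal in several places
# models the repetitiveness of XML markup (DAG sharing, a simple special
# case of grammar-based tree compression). Names here are invented.
RULES = {
    "S":   ("doc", ["Sec", "Sec", "Sec"]),
    "Sec": ("sec", ["P", "P"]),
    "P":   ("p",   ["B", "B"]),
    "B":   ("b",   []),
}

def count_label(rules, start, query):
    """Count nodes labelled `query` in the derived tree.

    Each nonterminal is counted once and memoized, so the running time is
    proportional to the grammar size, not the (possibly exponentially
    larger) decompressed tree.
    """
    memo = {}
    def go(nt):
        if nt in memo:
            return memo[nt]
        label, children = rules[nt]
        n = (1 if label == query else 0) + sum(go(c) for c in children)
        memo[nt] = n
        return n
    return go(start)
```

    Here the derived tree has 3 `sec` nodes, 6 `p` nodes and 12 `b` leaves, but each count is computed from only four grammar rules.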

  13. CLAIM (CLinical Accounting InforMation)--an XML-based data exchange standard for connecting electronic medical record systems to patient accounting systems.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2005-08-01

    With the evolving and diverse electronic medical record (EMR) systems, there appears to be an ever greater need to link EMR systems and patient accounting systems with a standardized data exchange format. To this end, the CLinical Accounting InforMation (CLAIM) data exchange standard was developed. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data among different medical institutions. CLAIM uses eXtensible Markup Language (XML) as a meta-language. The current version, 2.1, inherited the basic structure of MML 2.x and contains two modules including information related to registration, appointment, procedure and charging. CLAIM 2.1 was implemented successfully in Japan in 2001. Consequently, it was confirmed that CLAIM could be used as an effective data exchange format between EMR systems and patient accounting systems.
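
    As a rough illustration of the kind of XML message such a standard enables, the sketch below builds and re-parses a CLAIM-like accounting fragment with Python's standard library. The element names (ClaimMessage, Registration, Charging, etc.) are invented for this example and are not the actual CLAIM 2.1 module names.

```python
import xml.etree.ElementTree as ET

def build_claim_message(patient_id, procedure_code, fee):
    """Build a hypothetical CLAIM-style message linking an EMR record
    (registration/procedure) to accounting data (charging). Element and
    attribute names are illustrative, not the real CLAIM vocabulary."""
    root = ET.Element("ClaimMessage", version="2.1")
    reg = ET.SubElement(root, "Registration")
    ET.SubElement(reg, "PatientId").text = patient_id
    charge = ET.SubElement(root, "Charging")
    item = ET.SubElement(charge, "Item", code=procedure_code)
    ET.SubElement(item, "Fee", currency="JPY").text = str(fee)
    return ET.tostring(root, encoding="unicode")

def total_fees(xml_text):
    # The receiving patient accounting system can process the message
    # with any standard XML parser.
    root = ET.fromstring(xml_text)
    return sum(int(f.text) for f in root.iter("Fee"))
```

    Because the payload is plain XML, either side of the exchange can validate and transform it with off-the-shelf tooling, which is the point of using XML as the meta-language.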

  14. Clinical map document based on XML (cMDX): document architecture with mapping feature for reporting and analysing prostate cancer in radical prostatectomy specimens

    Directory of Open Access Journals (Sweden)

    Bettendorf Olaf

    2010-11-01

    Full Text Available Abstract Background The pathology report of radical prostatectomy specimens plays an important role in clinical decisions and the prognostic evaluation in Prostate Cancer (PCa). The anatomical schema is a helpful tool to document PCa extension for clinical and research purposes. To achieve electronic documentation and analysis, an appropriate documentation model for anatomical schemas is needed. For this purpose we developed cMDX. Methods The document architecture of cMDX was designed according to Open Packaging Conventions by separating the whole data into template data and patient data. Analogue custom XML elements were considered to harmonize the graphical representation (e.g. tumour extension) with the textual data (e.g. histological patterns). The graphical documentation was based on the four-layer visualization model that forms the interaction between different custom XML elements. Sensitive personal data were encrypted with a 256-bit cryptographic algorithm to avoid misuse. In order to assess the clinical value, we retrospectively analysed the tumour extension in 255 patients after radical prostatectomy. Results The pathology report with cMDX can represent pathological findings of the prostate in schematic styles. Such reports can be integrated into the hospital information system. "cMDX" documents can be converted into different data formats like text, graphics and PDF. Supplementary tools like cMDX Editor and an analyser tool were implemented. The graphical analysis of 255 prostatectomy specimens showed that PCa were mostly localized in the peripheral zone (Mean: 73% ± 25). 54% of PCa showed a multifocal growth pattern. Conclusions cMDX can be used for routine histopathological reporting of radical prostatectomy specimens and provide data for scientific analysis.

  15. Clinical map document based on XML (cMDX): document architecture with mapping feature for reporting and analysing prostate cancer in radical prostatectomy specimens.

    Science.gov (United States)

    Eminaga, Okyaz; Hinkelammert, Reemt; Semjonow, Axel; Neumann, Joerg; Abbas, Mahmoud; Koepke, Thomas; Bettendorf, Olaf; Eltze, Elke; Dugas, Martin

    2010-11-15

    The pathology report of radical prostatectomy specimens plays an important role in clinical decisions and the prognostic evaluation in Prostate Cancer (PCa). The anatomical schema is a helpful tool to document PCa extension for clinical and research purposes. To achieve electronic documentation and analysis, an appropriate documentation model for anatomical schemas is needed. For this purpose we developed cMDX. The document architecture of cMDX was designed according to Open Packaging Conventions by separating the whole data into template data and patient data. Analogue custom XML elements were considered to harmonize the graphical representation (e.g. tumour extension) with the textual data (e.g. histological patterns). The graphical documentation was based on the four-layer visualization model that forms the interaction between different custom XML elements. Sensitive personal data were encrypted with a 256-bit cryptographic algorithm to avoid misuse. In order to assess the clinical value, we retrospectively analysed the tumour extension in 255 patients after radical prostatectomy. The pathology report with cMDX can represent pathological findings of the prostate in schematic styles. Such reports can be integrated into the hospital information system. "cMDX" documents can be converted into different data formats like text, graphics and PDF. Supplementary tools like cMDX Editor and an analyser tool were implemented. The graphical analysis of 255 prostatectomy specimens showed that PCa were mostly localized in the peripheral zone (Mean: 73% ± 25). 54% of PCa showed a multifocal growth pattern. cMDX can be used for routine histopathological reporting of radical prostatectomy specimens and provide data for scientific analysis.
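
    The template/patient-data separation and the text conversion described in the abstract can be sketched as follows; all element names are invented for illustration and are not taken from the actual cMDX schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical cMDX-like document: the anatomical schema (template data)
# is kept separate from the findings for one patient (patient data),
# linked by region ids. All names here are invented for this sketch.
CMDX = """\
<cMDX>
  <TemplateData>
    <Region id="pz" name="peripheral zone"/>
    <Region id="tz" name="transition zone"/>
  </TemplateData>
  <PatientData>
    <Finding region="pz" pattern="Gleason 3+4" extentPercent="73"/>
    <Finding region="tz" pattern="Gleason 3+3" extentPercent="12"/>
  </PatientData>
</cMDX>
"""

def report_lines(xml_text):
    """Join graphical regions (template) with findings (patient data) --
    the kind of text conversion the abstract mentions."""
    root = ET.fromstring(xml_text)
    names = {r.get("id"): r.get("name") for r in root.iter("Region")}
    return [
        f"{names[f.get('region')]}: {f.get('pattern')} "
        f"({f.get('extentPercent')}% of zone)"
        for f in root.iter("Finding")
    ]
```

    Keeping the schema in a template part means the same anatomical drawing can be reused across all patients, while each patient part stays small and analysable.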

  16. Adaptación de tecnologías Stream y XML a centros de documentación en televisión

    Directory of Open Access Journals (Sweden)

    Pérez Agüera, José Ramón

    2004-12-01

    Full Text Available The potential of media streaming technologies for broadcasting information both on the Internet and on corporate intranets is presented. To this end, the definition and scope of media streaming for delivering audio and visual information are set out, covering both on-demand delivery and live broadcasting. The importance of these technologies for documentation departments is highlighted, as a means of dissemination to internal and external users and as economic added value for the enterprise. We also outline the main lines of evolution of these technologies in combination with XML for the standards-driven management of audiovisual content.

  17. An XML transfer schema for exchange of genomic and genetic mapping data: implementation as a web service in a Taverna workflow.

    Science.gov (United States)

    Paterson, Trevor; Law, Andy

    2009-08-14

    Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. We have developed a simple generic XML schema (GenomicMappingData.xsd - GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows. The data
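
    A minimal sketch of such an export, assuming invented element names rather than the real GenomicMappingData.xsd vocabulary: a genetic linkage map with marker positions and GenBank cross-references is serialized as a lightweight XML document.

```python
import xml.etree.ElementTree as ET

def export_map(map_name, markers):
    """Serialize linkage-map data in a GMD-like lightweight XML format.

    `markers` is a list of (marker_name, position_cM, genbank_accession)
    tuples. Element names are invented to illustrate the idea of "data
    objects plus relationships"; the real vocabulary is defined by
    GenomicMappingData.xsd.
    """
    root = ET.Element("GenomicMappingData")
    m = ET.SubElement(root, "Map", name=map_name, type="genetic-linkage")
    for name, pos, acc in markers:
        mk = ET.SubElement(m, "Marker", name=name)
        ET.SubElement(mk, "Position", units="cM").text = str(pos)
        # Cross-reference to an external datasource, as the schema allows.
        ET.SubElement(mk, "XRef", source="GenBank", accession=acc)
    return ET.tostring(root, encoding="unicode")
```

    A document produced this way could be returned by a web service and consumed as one step of a Taverna workflow, which is essentially what the ArkDB demonstration services do.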

  18. An XML transfer schema for exchange of genomic and genetic mapping data: implementation as a web service in a Taverna workflow

    Directory of Open Access Journals (Sweden)

    Law Andy

    2009-08-01

    Full Text Available Abstract Background Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. Results We have developed a simple generic XML schema (GenomicMappingData.xsd – GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of

  19. Three-Level Process Specification for Dynamic Service Outsourcing: From Petri Nets to ebXML and WFPDL

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Angelov, S.A.; Ehrig, Hartmut; Reisig, Wolfgang; Rozenberg, Grzegorz; Weber, Herbert

    2003-01-01

    Service outsourcing is the business paradigm, in which an organization has part of its business process performed by a service provider. In dynamic markets, service providers are selected on the fly during process enactment. The cooperation between the parties is specified in a dynamically made

  20. Three-Level Process Specification for Dynamic Service Outsourcing: From Petri Nets to ebXML and WFPDL

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Angelov, S.A.

    2003-01-01

    Service outsourcing is the business paradigm, in which an organization has part of its business process performed by a service provider. In dynamic markets, service providers are selected on the fly during process enactment. The cooperation between the parties is specified in a dynamically made