WorldWideScience

Sample records for document markup languages

  1. The WANDAML Markup Language for Digital Document Annotation

    NARCIS (Netherlands)

    Franke, K.; Guyon, I.; Schomaker, L.; Vuurpijl, L.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  2. An Introduction to the Extensible Markup Language (XML).

    Science.gov (United States)

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)
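
    The structured-document idea this record describes can be sketched with a toy XML fragment (element and attribute names invented for illustration, not from the article) parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XML document of the kind the abstract describes:
# structured content marked up with named elements and attributes.
doc = """<article id="a1">
  <title>An Introduction to XML</title>
  <author>Martin Bryan</author>
  <body>
    <para>XML is a subset of SGML for structured documents.</para>
  </body>
</article>"""

root = ET.fromstring(doc)        # parse the markup into an element tree
title = root.find("title").text  # navigate the structure by element name
print(root.tag, root.get("id"), title)
```

In a full application the allowed elements and their nesting would be constrained by a Document Type Definition (DTD), as the record notes.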

  3. A Leaner, Meaner Markup Language.

    Science.gov (United States)

    Online & CD-ROM Review, 1997

    1997-01-01

    In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  4. An object-oriented approach for harmonization of multimedia markup languages

    Science.gov (United States)

    Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay

    2003-12-01

    An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modeling Language (UML) as the data model to formalize the concept and process of harmonization between eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.

  5. STMML. A markup language for scientific, technical and medical publishing

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    STMML is an XML-based markup language covering many generic aspects of scientific information. It has been developed as a re-usable core for more specific markup languages. It supports data structures, data types, metadata, scientific units and some basic components of scientific narrative. The central means of adding semantic information is through dictionaries. The specification is through an XML Schema which can be used to validate STMML documents or fragments. Many examples of the language are given.

  6. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
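
    The retrieval advantage claimed for laboratory-specific markup can be sketched with a toy procedure fragment (the tag names here are invented, not the actual 124-tag CLP-ML vocabulary):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of CLP-ML: procedure-specific tags
# instead of generic narrative blocks.
procedure = """<procedure name="Glucose Assay">
  <reagents>
    <reagent>glucose oxidase</reagent>
  </reagents>
  <steps>
    <step n="1">Pipette 10 uL of sample.</step>
    <step n="2">Add 1 mL of reagent and incubate 5 min.</step>
  </steps>
</procedure>"""

root = ET.fromstring(procedure)
# Domain-specific tags make searching a direct traversal of known elements
# rather than a scan of free text.
hits = [s.get("n") for s in root.iter("step") if "incubate" in s.text]
print(hits)
```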

  7. Extensible Markup Language: How Might It Alter the Software Documentation Process and the Role of the Technical Communicator?

    Science.gov (United States)

    Battalio, John T.

    2002-01-01

    Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…

  8. SuML: A Survey Markup Language for Generalized Survey Encoding

    Science.gov (United States)

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  9. Wanda ML - a markup language for digital annotation

    NARCIS (Netherlands)

    Franke, K.Y.; Guyon, I.; Schomaker, L.R.B.; Vuurpijl, L.G.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  10. Developing a Markup Language for Encoding Graphic Content in Plan Documents

    Science.gov (United States)

    Li, Jinghuan

    2009-01-01

    While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…

  11. LOG2MARKUP: State module to transform a Stata text log into a markup document

    DEFF Research Database (Denmark)

    2016-01-01

    log2markup extracts parts of the text log produced by the Stata log command and transforms the logfile into a markup-based document with the same name, but with extension markup (or as otherwise specified in the extension option) instead of log. The author usually uses markdown for writing documents. However...

  12. Descriptive markup languages and the development of digital humanities

    Directory of Open Access Journals (Sweden)

    Boris Bosančić

    2012-11-01

    The paper discusses the role of descriptive markup languages in the development of digital humanities, a new research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities since the mid-1980s and the appearance of SGML, the markup language that was the foundation of TEI, a key standard for the encoding and exchange of humanities texts in the digital environment, is inseparable from the development of markup languages. Special attention is given to the development of the Text Encoding Initiative (TEI), the organization that developed this standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML

  13. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  14. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…

  15. Experimental Applications of Automatic Test Markup Language (ATML)

    Science.gov (United States)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modeling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  16. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    International Nuclear Information System (INIS)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI

  17. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.

  18. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
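
    A minimal sketch of the XML encoding the specification defines, using a stripped-down model (a real SBML Level 3 file also carries compartments, reactions, and other structure subject to the validation rules):

```python
import xml.etree.ElementTree as ET

# Skeleton in the style of an SBML Level 3 Version 1 document (simplified).
sbml_doc = """<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      level="3" version="1">
  <model id="toy_model">
    <listOfSpecies>
      <species id="glucose"/>
      <species id="atp"/>
    </listOfSpecies>
  </model>
</sbml>"""

SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"
root = ET.fromstring(sbml_doc)
model = root.find("sbml:model", {"sbml": SBML_NS})
species = [s.get("id") for s in root.iter(f"{{{SBML_NS}}}species")]
print(model.get("id"), species)
```

Because every tool reads and writes this one representation, a model built in one simulator can be analyzed in another without translation.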

  19. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments, as well as one for a sample AIS, are provided. The files demonstrate how AISML can be used for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  20. A Question-Answering System Based on the Artificial Intelligence Markup Language as an Information Medium

    Directory of Open Access Journals (Sweden)

    Fajrin Azwary

    2016-04-01

    Artificial intelligence technology can nowadays take a variety of forms, such as chatbots, and use various methods, one of which is the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing input against specific patterns in a database. The AIML template design process begins with determining the necessary information, which is then formed into questions; these questions are adapted to AIML patterns. The results of the study show that a question-answering system implemented as a chatbot using the Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
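
    The template-matching idea behind AIML can be sketched as a toy pattern-to-template lookup (a simplified illustration with invented patterns, not a conforming AIML interpreter):

```python
import re

# Each AIML-style category pairs a pattern (optionally with a "*" wildcard)
# with a response template; input is matched against patterns in turn.
categories = {
    "WHAT IS *": "Let me look up information about {0}.",
    "HELLO": "Hi! Ask me a question.",
}

def respond(user_input: str) -> str:
    text = user_input.upper().strip()
    for pattern, template in categories.items():
        # Turn the pattern into a regex, with "*" capturing any phrase.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*m.groups())
    return "I do not understand."

print(respond("what is AIML"))
```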

  1. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2018-03-09

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.

  2. Improving Interoperability by Incorporating UnitsML Into Markup Languages.

    Science.gov (United States)

    Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W

    2010-01-01

    Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language (AnIML), a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.
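
    The core idea, keeping the unit as structured markup next to the value rather than as an ambiguous free-text suffix, can be sketched with invented tag names (not the actual UnitsML schema):

```python
import xml.etree.ElementTree as ET

# Illustrative markup: the unit travels with the measurement as structure,
# so downstream tools can interpret it without guessing.
measurement = """<measurement>
  <value>3.5</value>
  <unit symbol="mL">
    <quantity>volume</quantity>
  </unit>
</measurement>"""

m = ET.fromstring(measurement)
value = float(m.findtext("value"))
symbol = m.find("unit").get("symbol")
print(value, symbol)
```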

  3. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-03-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.

  4. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-04-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Release 2 of Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. No design changes have been made to the description of models between Release 1 and Release 2; changes are restricted to the format of annotations, the correction of errata and the addition of clarifications. Other materials and software are available from the SBML project website at http://sbml.org/.

  5. PIML: the Pathogen Information Markup Language.

    Science.gov (United States)

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to bring in other groups supporting PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  6. Development of clinical contents model markup language for electronic health records.

    Science.gov (United States)

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

    To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML)-based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.

  7. The geometry description markup language

    International Nuclear Information System (INIS)

    Chytracek, R.

    2001-01-01

    Currently, a lot of effort is being put into designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim of making this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to representing geometry data is available, and such data can be found in various forms, from custom semi-structured text files and source code (C/C++/FORTRAN) to XML and database solutions. XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. The author introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.

  8. Field Data and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Ralf Löwner

    2007-06-01

    Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. The comparison between different data sets and different disciplines creates new data, adds value, and finally accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML), to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. GHML initially offers easily interpretable content because information is encoded as text rather than binary data. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly the standards defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements (in particular the Geography Markup Language (GML, ISO 19136) and the whole ISO 19000 series). However, the requirements demanded a quick solution and an XML application schema readable by any scientist without a background in information technology. Therefore, ideas, concepts and definitions were used to build up the modules of GHML without importing any of these markup languages. This enables a comprehensive schema and simple use.

  9. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
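
    The final validation step described above can be sketched as a well-formedness check on generated XML (ElementTree checks well-formedness only; validating against a DTD, as in the study, requires a validating parser):

```python
import xml.etree.ElementTree as ET

# Toy output in the spirit of the document model: a structured component
# alongside the original narrative text (element names invented here).
report_xml = """<report>
  <structured>
    <finding concept="pneumonia">pneumonia</finding>
  </structured>
  <original>Chest X-ray shows pneumonia.</original>
</report>"""

def is_well_formed(xml_text: str) -> bool:
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(report_xml))                     # structurally sound
print(is_well_formed("<report><finding></report>"))   # mismatched tags
```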

  10. The development of MML (Medical Markup Language) version 3.0 as a medical document exchange format for HL7 messages.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2004-12-01

    Medical Markup Language (MML), as a set of standards, has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML Version 2.21 used XML as a metalanguage and was announced in 1999. In 2001, MML was updated to Version 2.3, which contained 12 modules. The latest version--Version 3.0--is based on the HL7 Clinical Document Architecture (CDA). During the development of this new version, the structure of MML Version 2.3 was analyzed, subdivided into several categories, and redefined so the information defined in MML could be described in HL7 CDA Level One. As a result of this development, it has become possible to exchange MML Version 3.0 medical documents via HL7 messages.

  11. Development of Markup Language for Medical Record Charting: A Charting Language.

    Science.gov (United States)

    Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung

    2015-01-01

Many initiatives for collecting electronic medical records (EMRs) currently exist. However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new markup language for medical record charting (called Charting Language), which borrows useful properties from programming languages. Thus, with Charting Language, text data describing dynamic situations can easily be used to extract information.

  12. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    Science.gov (United States)

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  13. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
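A minimal SBML-style model can be serialized with the standard library. This is a simplified sketch: the namespace URI and attribute set follow the general Level 2 pattern but are assumptions here; the authoritative element definitions are in the specification itself.

```python
# Sketch of a minimal SBML-style model document (assumed namespace and
# attributes; see the SBML Level 2 Version 5 specification for the real rules).
import xml.etree.ElementTree as ET

NS = "http://www.sbml.org/sbml/level2/version5"  # assumed L2V5 namespace
ET.register_namespace("", NS)

sbml = ET.Element(f"{{{NS}}}sbml", level="2", version="5")
model = ET.SubElement(sbml, f"{{{NS}}}model", id="toy_model")
species_list = ET.SubElement(model, f"{{{NS}}}listOfSpecies")
ET.SubElement(species_list, f"{{{NS}}}species",
              id="S1", compartment="cell", initialConcentration="1.0")

xml_text = ET.tostring(sbml, encoding="unicode")
print(xml_text)
```

Because the declarative model lives in one XML document, any SBML-aware tool can parse the identical representation, which is the interoperability point the abstract makes.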

  14. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2015-06-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  15. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
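The ingredients a SED-ML file carries (which model to use, how to simulate it, which task ties them together) can be sketched as follows. Element names are modeled on the SED-ML specification but simplified; the official schema has additional required structure.

```python
# Sketch of a SED-ML-style experiment description (simplified; element names
# modeled on the SED-ML Level 1 Version 2 spec, not a complete valid file).
import xml.etree.ElementTree as ET

sed = ET.Element("sedML", level="1", version="2")
models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", id="m1", source="model.xml",
              language="urn:sedml:language:sbml")
sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="sim1",
              initialTime="0", outputStartTime="0", outputEndTime="10",
              numberOfPoints="100")
tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="t1",
              modelReference="m1", simulationReference="sim1")

sedml_text = ET.tostring(sed, encoding="unicode")
print(sedml_text)
```

The task element is what makes results reproducible: it pins a specific model to a specific simulation procedure, as the MIASE guidelines recommend.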

  16. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.

  17. The Behavior Markup Language: Recent Developments and Challenges

    NARCIS (Netherlands)

    Vilhjalmsson, Hannes; Cantelmo, Nathan; Cassell, Justine; Chafai, Nicholas E.; Kipp, Michael; Kopp, Stefan; Mancini, Maurizio; Marsella, Stacy; Marshall, Andrew N.; Pelachaud, Catherine; Ruttkay, Z.M.; Thorisson, Kristinn R.; van Welbergen, H.; van der Werf, Rick J.; Pelachaud, Catherine; Martin, Jean-Claude; Andre, Elisabeth; Collet, Gerard; Karpouzis, Kostas; Pele, Danielle

    2007-01-01

    Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide, and continues to undergo further refinement. This paper reports on the

  18. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable, manner has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (APIs) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data, all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.

  19. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

    With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) will focus on genomic sequence variation data and human health applications, such as gene-based medicine or pharmacogenomics. We developed GSVML through eight steps, based on case analysis and domain investigations. By focusing the design scope on human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format while maximizing the data covering range. We also attempted to ensure communication and interface ability with other markup languages for the exchange of omics data among various omics researchers or facilities. The interface ability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information and have internal relationships. For the design, we examined 6 cases against three criteria for human health applications and 15 data elements against three criteria for data formats for genomic sequence variation data exchange. The data formats of five international SNP databases and six markup languages, and the interface ability to the Health Level Seven Genotype Model in terms of 317 items, were investigated. GSVML was developed as

  20. The Accelerator Markup Language and the Universal Accelerator Parser

    International Nuclear Information System (INIS)

    Sagan, D.; Forster, M.; Cornell U., LNS; Bates, D.A.; LBL, Berkeley; Wolski, A.; Liverpool U.; Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; CERN; Walker, N.J.; DESY; Larrieu, T.; Roblin, Y.; Jefferson Lab; Pelaia, T.; Oak Ridge; Tenenbaum, P.; Woodley, M.; SLAC; Reiche, S.; UCLA

    2006-01-01

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format

  1. Development of the atomic and molecular data markup language for internet data exchange

    International Nuclear Information System (INIS)

    Ralchenko, Yuri; Clark, Robert E.H.; Humbert, Denis; Schultz, David R.; Kato, Takako; Rhee, Yong Joo

    2006-01-01

    Accelerated development of the Internet technologies, including those relevant to the atomic and molecular physics, poses new requirements for the proper communication between computers, users and applications. To this end, a new standard for atomic and molecular data exchange that would reflect the recent achievements in this field becomes a necessity. We report here on development of the Atomic and Molecular Data Markup Language (AMDML) that is based on eXtensible Markup Language (XML). The present version of the AMDML Schema covers atomic spectroscopic data as well as the electron-impact collisions. (author)

  2. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    In general, Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
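The basic tagging described above can be illustrated with a tiny JATS-style fragment built and re-parsed with the standard library. The element names (article/front/article-meta/title-group/article-title) follow common JATS usage, but this is a minimal sketch, not a complete valid JATS document.

```python
# Minimal JATS-style fragment (sketch only; consult the JATS tag library
# for required metadata elements such as journal-meta).
import xml.etree.ElementTree as ET

article = ET.Element("article")
front = ET.SubElement(article, "front")
meta = ET.SubElement(front, "article-meta")
title_group = ET.SubElement(meta, "title-group")
ET.SubElement(title_group, "article-title").text = "Coding practice of JATS XML"

jats = ET.tostring(article, encoding="unicode")
# Any UTF-8-capable XML parser (or a web browser) can read this back:
title = ET.fromstring(jats).findtext(".//article-title")
print(title)
```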

  3. The Petri Net Markup Language : concepts, technology, and tools

    NARCIS (Netherlands)

    Billington, J.; Christensen, S.; Hee, van K.M.; Kindler, E.; Kummer, O.; Petrucci, L.; Post, R.D.J.; Stehno, C.; Weber, M.; Aalst, van der W.M.P.; Best, E.

    2003-01-01

    The Petri Net Markup Language (PNML) is an XML-based interchange format for Petri nets. In order to support different versions of Petri nets and, in particular, future versions of Petri nets, PNML allows the definition of Petri net types.Due to this flexibility, PNML is a starting point for a

  4. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language under the Defense Advanced Research Projects Agency- DARPA Agent Markup Language (DARPA-DAML) program...

  5. ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)

    Science.gov (United States)

    Sailors, R. Matthew

    2001-01-01

    It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. ArdenML (the Arden Syntax Markup Language), an XML-based markup language, allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).

  6. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  7. SBMLeditor: effective creation of models in the Systems Biology Markup Language (SBML).

    Science.gov (United States)

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need for a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree, which dramatically decreases the development time for a new XML editor. The possibility of including custom dialogs for different tags allows a lot of freedom in the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.

  8. Geospatial Visualization of Scientific Data Through Keyhole Markup Language

    Science.gov (United States)

    Wernecke, J.; Bailey, J. E.

    2008-12-01

    The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the paralleling maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. Originally created by Keyhole Inc., and then acquired by Google in 2004, in 2007 KML was given over to the Open Geospatial Consortium (OGC). It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, has been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any users can create, edit, and share their own KML, with no or relatively little knowledge of manipulating computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
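A minimal KML placemark of the kind these layers are built from can be written with the standard library. The namespace is the OGC KML 2.2 one; the placemark content (name, coordinates) is an illustrative example, and KML coordinates are ordered longitude,latitude[,altitude].

```python
# Minimal KML Placemark sketch (OGC KML 2.2 namespace; example data).
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Sample volcano site"
point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
# lon,lat,alt -- note the longitude-first ordering used by KML
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "167.17,-77.53,3794"

kml_text = ET.tostring(kml, encoding="unicode")
print(kml_text)
```

Saved with a .kml extension, a file like this opens directly in Google Earth or any other geobrowser that implements the OGC standard.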

  9. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input-output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
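What a PMML GPR model must encode (kernel type, hyperparameters, training data) is exactly what is needed to reproduce the posterior mean and variance at a new point. The following pure-Python sketch computes both for an RBF kernel on toy data; it illustrates the standard GPR prediction equations, not the paper's PMML representation itself.

```python
# Pure-Python GPR prediction sketch: posterior mean and variance at one test
# point with an RBF kernel (toy data; illustrates the quantities a PMML
# GaussianProcessModel is meant to encode).
import math

def rbf(x, y, length=1.0, var=1.0):
    return var * math.exp(-0.5 * (x - y) ** 2 / length ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]   # toy training data
noise = 1e-6
K = [[rbf(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(X)]
     for i, xi in enumerate(X)]
alpha = solve(K, y)                        # K^-1 y

x_star = 1.0
k_star = [rbf(x_star, xi) for xi in X]
mean = sum(ks * al for ks, al in zip(k_star, alpha))
v = solve(K, k_star)                       # K^-1 k_*
var_post = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
print(mean, var_post)   # mean ~= 1.0 at the training point x = 1
```

The variance term is the "confidence" feature the abstract refers to: near training points it shrinks towards the noise level, and far from them it approaches the prior variance.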

  10. A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.

    Science.gov (United States)

    Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J

    2016-06-17

    Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.

  11. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

    Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion: We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to a markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange, including with public flow cytometry repositories currently under development.

  12. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

    Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including

  13. SBRML: a markup language for associating systems biology data with models.

    Science.gov (United States)

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.
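The core SBRML idea (indexing result values against model variables and parameter values) can be sketched as follows. The element names here are illustrative assumptions, not the official schema, which is defined by the XML Schema file at the URL above.

```python
# Hypothetical sketch of associating time-course results with a model
# variable, SBRML-style (illustrative element names, not the real schema).
import xml.etree.ElementTree as ET

sbrml = ET.Element("sbrml", model="BIOMD0000000001")
result = ET.SubElement(sbrml, "operationResult", operation="timeCourse")
for t, conc in [(0.0, 1.00), (1.0, 0.61), (2.0, 0.37)]:
    row = ET.SubElement(result, "resultRow", time=str(t))
    ET.SubElement(row, "value", variable="S1").text = str(conc)

# Query: all values recorded for model variable S1
s1 = [float(v.text) for v in sbrml.iter("value") if v.get("variable") == "S1"]
print(s1)
```

Because every value carries a reference to a model variable, a dataset serialized this way stays interpretable in the context of the model it came from, which is the gap the abstract identifies.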

  14. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
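A hierarchical classification encoded in XML, in the spirit of the ICD-10 representation described above, can be queried with ordinary XML tools. The element and attribute names below are illustrative assumptions, not the authors' actual schema.

```python
# Sketch of a hierarchical classification in XML with code lookup
# (illustrative structure; not the paper's actual ICD-10 representation).
import xml.etree.ElementTree as ET

icd = ET.fromstring("""
<classification system="ICD-10" version="illustrative">
  <class code="I10-I15" label="Hypertensive diseases">
    <class code="I10" label="Essential (primary) hypertension"/>
    <class code="I11" label="Hypertensive heart disease"/>
  </class>
</classification>
""")

def lookup(root, code):
    # Nested <class> elements carry the hierarchy; XPath finds any depth.
    node = root.find(f".//class[@code='{code}']")
    return node.get("label") if node is not None else None

print(lookup(icd, "I10"))
```

Nesting encodes the block/category hierarchy directly, so coding software can walk from a block such as I10-I15 down to individual codes without a separate lookup table.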

  15. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  16. A primer on the Petri Net Markup Language and ISO/IEC 15909-2

    DEFF Research Database (Denmark)

    Hillah, L. M.; Kindler, Ekkart; Kordon, F.

    2009-01-01

    Standard, defines a transfer format for high-level nets. The transfer format defined in Part 2 of ISO/IEC 15909 is (or is based on) the Petri Net Markup Language (PNML), which was originally introduced as an interchange format for different kinds of Petri nets. In ISO/IEC 15909-2, however...

  17. Question Answering System Based on Artificial Intelligence Markup Language as an Information Medium

    OpenAIRE

    Fajrin Azwary; Fatma Indriani; Dodon T. Nugrahadi

    2016-01-01

    Artificial intelligence technology can nowadays be deployed in a variety of forms, such as chatbots, using various methods, one of which is the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing user input against specific patterns in a database. The AIML template design process begins with determining the necessary information, which is then formed into questions, and these questions are adapted to AIML patterns. From the results of the study, it can be seen that the Question-Answering...
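Template matching of the kind AIML performs can be sketched without an AIML engine: normalize the input, then compare it against stored patterns with a wildcard. Real AIML stores these as category/pattern/template XML elements; this sketch models only the matching idea, and the patterns and responses are invented examples.

```python
# Minimal sketch of AIML-style template matching: "*" is a wildcard, and the
# matched fragment is substituted into the response template.
import re

categories = {
    "HELLO": "Hi! How can I help you?",
    "WHAT IS *": "Let me look up {0} for you.",
}

def respond(user_input):
    # Normalize like an AIML engine: strip punctuation, uppercase.
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in categories.items():
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*m.groups())
    return "I do not understand."

print(respond("What is AIML?"))
```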

  18. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html. Contact: kikuchi@hydra.mki.co.jp.

  19. Pathology data integration with eXtensible Markup Language.

    Science.gov (United States)

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
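
    The abstract's central claim, that annotated data elements turn a narrative report into something queryable with standard XML tools, can be sketched as follows. The element names are illustrative, not a published pathology standard:

```python
# Sketch of the idea: individual data elements of a pathology report
# are wrapped in XML tags, after which a standard tool can run a
# database-style query over the report. Tags are hypothetical.
import xml.etree.ElementTree as ET

report = """<report>
  <specimen site="colon">
    <diagnosis code="M-81403">adenocarcinoma</diagnosis>
  </specimen>
  <specimen site="liver">
    <diagnosis code="M-80103">metastatic carcinoma</diagnosis>
  </specimen>
</report>"""

root = ET.fromstring(report)
# Query: every diagnosis in the report, paired with its specimen site.
pairs = [(s.get("site"), s.findtext("diagnosis"))
         for s in root.findall("specimen")]
print(pairs)
```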

  20. Computerization of guidelines: towards a "guideline markup language".

    Science.gov (United States)

    Dart, T; Xu, Y; Chatellier, G; Degoulet, P

    2001-01-01

    Medical decision making is one of the most difficult daily tasks for physicians. Guidelines have been designed to reduce variance between physicians in daily practice, to improve patient outcomes and to control costs. In fact, few physicians use guidelines in daily practice. A way to ease the use of guidelines is to implement computerised guidelines (computer reminders). We present in this paper a method of computerising guidelines. Our objectives were: 1) to propose a generic model that can be instantiated for any specific guidelines; 2) to use eXtensible Markup Language (XML) as a guideline representation language to instantiate the generic model for a specific guideline. Our model is an object representation of a clinical algorithm, it has been validated by running two different guidelines issued by a French official Agency. In spite of some limitations, we found that this model is expressive enough to represent complex guidelines devoted to diabetes and hypertension management. We conclude that XML can be used as a description format to structure guidelines and as an interface between paper-based guidelines and computer applications.
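
    A clinical algorithm encoded in XML can be executed directly as a computer reminder. This is a bare sketch of that pattern under invented node names and thresholds, not the paper's actual model:

```python
# Sketch: a one-node guideline algorithm stored as XML, interpreted at
# the point of care. The test expression and recommendations are
# invented for illustration.
import xml.etree.ElementTree as ET

guideline = """<algorithm>
  <decision test="sbp &gt;= 140">
    <then>start antihypertensive therapy</then>
    <else>lifestyle advice, recheck in 6 months</else>
  </decision>
</algorithm>"""

def recommend(sbp, xml_text=guideline):
    """Walk the decision node and return the matching recommendation."""
    node = ET.fromstring(xml_text).find("decision")
    threshold = int(node.get("test").split(">=")[1])
    branch = "then" if sbp >= threshold else "else"
    return node.findtext(branch)

print(recommend(150))
```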

  1. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivities of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards that can be used to encode data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/) database. General implementation of AllerML will promote automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    Science.gov (United States)

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
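
    The core of what such a format must carry is positions of named objects at successive time points. The tags below are illustrative only; the normative schema is the one published at http://ssbd.qbic.riken.jp/bdml/:

```python
# Sketch of the BDML idea: quantitative spatiotemporal data (here, one
# cell's position over two time points) stored in XML and read back as
# a trajectory. Tag and attribute names are hypothetical.
import xml.etree.ElementTree as ET

bdml_like = """<data>
  <time t="0"><object id="cell1" x="0.0" y="0.0" z="0.0"/></time>
  <time t="1"><object id="cell1" x="1.5" y="0.5" z="0.0"/></time>
</data>"""

def trajectory(xml_text, obj_id):
    """Return (t, x, y) tuples for one object, in time order."""
    root = ET.fromstring(xml_text)
    return [
        (float(t.get("t")), float(o.get("x")), float(o.get("y")))
        for t in root.findall("time")
        for o in t.findall("object")
        if o.get("id") == obj_id
    ]

print(trajectory(bdml_like, "cell1"))
```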

  3. Earth Science Markup Language: Transitioning From Design to Application

    Science.gov (United States)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  4. On the Power of Fuzzy Markup Language

    CERN Document Server

    Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui

    2013-01-01

    One of the most successful methodologies that arose from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts dating to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted upon almost all aspects of computing. The evolution of the Web, Web 2.0 and Web 3.0, has made scenarios of ubiquitous computing much more feasible; consequently, information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Markup Language is a product of this synergistic view, in which some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here. The concept of a Fuzzy Control that is conceived and modeled in terms...

  5. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    The article examines the progress of semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), which is another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). Semantic markup of Tuvan lexis will serve as a search engine and reference system helping users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.

  6. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    Science.gov (United States)

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.
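
    The minimum a machine-readable fusion record must identify is the two partner genes and their breakpoints. This is a sketch of that shape with guessed element names (the actual prototype is on the project site); the breakpoint coordinates are placeholders, not real genomic positions:

```python
# Sketch of a machine-readable gene fusion record in the spirit of
# GFML: 5' and 3' partners with placeholder breakpoints. Element and
# attribute names are illustrative, not the GFML prototype itself.
import xml.etree.ElementTree as ET

fusion = """<geneFusion>
  <fivePrime gene="TMPRSS2" chromosome="21" breakpoint="0"/>
  <threePrime gene="ERG" chromosome="21" breakpoint="0"/>
</geneFusion>"""

root = ET.fromstring(fusion)
# Conventional fusion name: 5' partner, hyphen, 3' partner.
name = root.find("fivePrime").get("gene") + "-" + root.find("threePrime").get("gene")
print(name)
```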

  7. Automatically Generating a Distributed 3D Battlespace Using USMTF and XML-MTF Air Tasking Order, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  8. The basics of CrossRef extensible markup language

    Directory of Open Access Journals (Sweden)

    Rachael Lammey

    2014-08-01

    CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef's citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join in order to avail of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article gives an introduction to CrossRef XML and what publishers need to do in order to start depositing DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.
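
    At its core, a deposit pairs each DOI with the URL it should resolve to. The fragment built below shows only that shape and is not valid CrossRef deposit XML (the real schema wraps this in extensive bibliographic metadata); the DOI and URL are made up:

```python
# Bare sketch of the heart of a CrossRef deposit: a DOI paired with
# its resolution URL. Not schema-valid on its own; values are
# placeholders for illustration.
import xml.etree.ElementTree as ET

item = ET.Element("doi_data")
ET.SubElement(item, "doi").text = "10.1234/example.2014.001"
ET.SubElement(item, "resource").text = "https://example.org/article/1"

xml_text = ET.tostring(item, encoding="unicode")
print(xml_text)
```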

  9. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    Science.gov (United States)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited in only supporting scalar time-independent data. Additional functionality is required to support vector and matrix data, abstracting sub-system models, detailing dynamics system models (both discrete and continuous), and defining a dynamic data format (such as time sequenced data) for validation of dynamics system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file; thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  10. Automatically Generating a Distributed 3D Virtual Battlespace Using USMTF and XML-MTF Air Tasking Orders, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  11. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).

  12. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3)

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  13. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
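
    The five parts the abstract enumerates (models, modifications, simulations, post-processing, outputs) give a simulation experiment document a predictable top-level shape. The toy document below follows that division of labor with simplified tag names, not the normative SED-ML schema:

```python
# Toy rendering of SED-ML's five-part structure. Tags are simplified
# stand-ins for the real listOfModels/listOfTasks/... containers.
import xml.etree.ElementTree as ET

sedml_like = """<experiment>
  <model id="m1" source="model.xml"/>
  <change target="k1" newValue="0.5"/>
  <simulation id="sim1" type="timeCourse" start="0" end="10"/>
  <task model="m1" simulation="sim1"/>
  <output type="plot2D" x="time" y="concentration"/>
</experiment>"""

root = ET.fromstring(sedml_like)
# (i) model, (ii) change, (iii) simulation + task, (iv)-(v) output:
summary = {child.tag for child in root}
print(sorted(summary))
```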

  14. Root system markup language: toward a unified root architecture description language.

    Science.gov (United States)

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. © 2015 American Society of Plant Biologists. All Rights Reserved.
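
    The geometric core of such a format is a polyline of points per root, from which derived traits can be computed by any tool that reads the file. The fragment below follows the published format only loosely; treat the exact tag and attribute names as illustrative:

```python
# Sketch: a root's geometry as a polyline, read back to compute a
# derived trait (total root length). Tags loosely follow RSML.
import xml.etree.ElementTree as ET

rsml_like = """<root id="r1">
  <geometry>
    <point x="0" y="0"/>
    <point x="0" y="5"/>
    <point x="2" y="9"/>
  </geometry>
</root>"""

def root_length(xml_text):
    """Sum Euclidean segment lengths along the root's polyline."""
    pts = [(float(p.get("x")), float(p.get("y")))
           for p in ET.fromstring(xml_text).iter("point")]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

print(root_length(rsml_like))
```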

  15. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research

  16. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  17. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development

    Science.gov (United States)

    Swat, MJ; Moodie, S; Wimalaratne, SM; Kristensen, NR; Lavielle, M; Mari, A; Magni, P; Smith, MK; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, AC; Kaye, R; Keizer, R; Kloft, C; Kok, JN; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, HB; Parra-Guillen, ZP; Plan, E; Ribba, B; Smith, G; Trocóniz, IF; Yvon, F; Milligan, PA; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-01-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps. PMID:26225259

  18. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development.

    Science.gov (United States)

    Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-06-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.

  19. Interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-07-01

    Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.

  20. XML Schema Languages: Beyond DTD.

    Science.gov (United States)

    Ioannides, Demetrios

    2000-01-01

    Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)

  1. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  2. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  3. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as inability to store multiple clustering (including biclustering) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be effectively used for the representation of clustering and for the validation of other biomedical and physical data that has no limitations.

  4. Managing and Querying Image Annotation and Markup in XML.

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
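ElementTree's limited XPath subset can illustrate the kind of structural query the paper supports through XQuery; the document below is hypothetical, and a real AIM database would use a native XML engine rather than the Python standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical mini-collection of annotations. XQuery itself needs a
# dedicated engine; ElementTree's XPath subset only sketches the idea.
doc = ET.fromstring("""
<collection>
  <annotation patient="P1" finding="nodule"><size mm="8"/></annotation>
  <annotation patient="P2" finding="mass"><size mm="22"/></annotation>
  <annotation patient="P1" finding="mass"><size mm="15"/></annotation>
</collection>
""")

# Structural query: all annotations for patient P1.
p1 = doc.findall(".//annotation[@patient='P1']")

# Value-based query: findings whose measured size exceeds 10 mm.
large = [a.get("finding") for a in doc
         if float(a.find("size").get("mm")) > 10]
```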

  5. Managing and Querying Image Annotation and Markup in XML

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167

  6. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  7. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  8. Histoimmunogenetics Markup Language 1.0: Reporting next generation sequencing-based HLA and KIR genotyping.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2015-12-01

    We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. site-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS.
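The hierarchical delimiters of Genotype List Strings can be sketched as nested string splits. The delimiter roles assumed here ('|' between alternative genotypes, '+' between the two chromosome copies, '/' between ambiguous alleles) should be checked against the GL String specification.

```python
# Sketch of Genotype List (GL) String parsing. Assumed delimiter
# precedence, loosest to tightest: '|' separates alternative
# genotypes, '+' separates the two copies within a genotype, and
# '/' separates ambiguous allele options -- verify against the spec.
def parse_genotype_list(gl):
    """Return, per alternative genotype, the list of allele options
    for each chromosome copy."""
    return [[copy.split("/") for copy in geno.split("+")]
            for geno in gl.split("|")]

gl = "HLA-A*01:01/HLA-A*01:02+HLA-A*02:01|HLA-A*01:03+HLA-A*02:01"
parsed = parse_genotype_list(gl)
```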

  9. Semantic Markup for Literary Scholars: How Descriptive Markup Affects the Study and Teaching of Literature.

    Science.gov (United States)

    Campbell, D. Grant

    2002-01-01

    Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)

  10. Markups and Exporting Behavior

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic Michel Patrick

    2012-01-01

    In this paper, we develop a method to estimate markups using plant-level production data. Our approach relies on cost-minimizing producers and the existence of at least one variable input of production. The suggested empirical framework relies on the estimation of a production function and provides estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge, on average, higher markups; and that markups increase upon export entry.
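The markup formula underlying this line of work divides the output elasticity of a variable input by that input's share of revenue; the numbers below are purely illustrative.

```python
# Back-of-envelope sketch of the markup formula used in this
# literature: markup = (output elasticity of a variable input)
# / (that input's expenditure share of revenue). The elasticity
# itself comes from an estimated production function.
def markup(output_elasticity, input_expenditure, revenue):
    share = input_expenditure / revenue
    return output_elasticity / share

# A plant spending 40 of 100 in revenue on materials, with an
# estimated materials elasticity of 0.5, implies a markup of 1.25.
mu = markup(0.5, 40.0, 100.0)
```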

  11. XML: Examples of use (presentation)

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - XML application = markup language = vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  12. XML: Examples of use

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

    XML (eXtensible Markup Language) - XML application = markup language = vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  13. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    Science.gov (United States)

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

    Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is a good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view them, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the eXtensible Markup Language Stylesheet Language for Transformation standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice.

  14. Data on the interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-09-01

    The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  15. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.
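The XML-to-SVG rendering step can be sketched with a toy transform; the `mechanism`/`arrow` markup below is hypothetical, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a tiny transform from a hypothetical
# reaction-arrow markup into SVG, mirroring the paper's strategy of
# converting mechanism markup into SVG for 2D display in a browser.
mech = ET.fromstring(
    '<mechanism><arrow x1="0" y1="10" x2="50" y2="10"/></mechanism>')

svg = ET.Element("svg", {"xmlns": "http://www.w3.org/2000/svg"})
for arrow in mech.findall("arrow"):
    # Copy the coordinates through and add presentation attributes.
    ET.SubElement(svg, "line", dict(arrow.attrib, stroke="black"))

out = ET.tostring(svg, encoding="unicode")
```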

  16. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    Science.gov (United States)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition LANML is flexible; ensuring future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.

  17. From paper to digital documents : Challenging and improving the SGML approach

    OpenAIRE

    Sandahl, Tone Irene

    1999-01-01

    This research has been initiated on the basis of practical experiences in developing a relatively large SGML system at the University of Oslo. This thesis contributes to the field of information systems, with a particular focus on document systems. The aim of this work is to inform the design of document systems by considering the transformation from paper to digital documents in organizations. The Standard Generalized Markup Language (SGML, ISO 8879) approach is emphasized. The SGML approach...

  18. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  19. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  20. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
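The estimation task such a tool automates, stripped to its core, is fitting one parameter set jointly to several datasets. The grid search below stands in for a real optimizer, and the parallel MPI execution is omitted.

```python
# Minimal illustration of collective fitting: one parameter, several
# experimental datasets, sum-of-squared-errors objective. Real tools
# use far better optimizers and distribute the work over MPI ranks.
def sse(k, datasets):
    # Toy model: y = k * t, fitted jointly across all datasets.
    return sum((y - k * t) ** 2
               for data in datasets for t, y in data)

datasets = [[(1, 2.1), (2, 3.9)], [(1, 1.9), (3, 6.1)]]
best_k = min((k / 100 for k in range(1, 301)),
             key=lambda k: sse(k, datasets))
```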

  1. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  2. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and an intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
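A minimal, finite-difference version of the local sensitivity analysis such a tool automates, applied to a toy steady-state expression:

```python
# Normalized local sensitivity via central finite differences:
# S_i = (p_i / f) * df/dp_i. This is the simplest of the analyses a
# tool like SBML-SAT performs on full SBML models.
def local_sensitivity(f, params, i, rel=1e-6):
    p = list(params)
    h = p[i] * rel
    p[i] += h
    up = f(p)          # f at p_i + h
    p[i] -= 2 * h
    down = f(p)        # f at p_i - h
    base = f(list(params))
    return (params[i] / base) * (up - down) / (2 * h)

# Toy steady-state output x = k1 / k2: sensitivities are +1 and -1.
model = lambda p: p[0] / p[1]
s1 = local_sensitivity(model, [2.0, 4.0], 0)
s2 = local_sensitivity(model, [2.0, 4.0], 1)
```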

  3. Document Categorization with Modified Statistical Language Models for Agglutinative Languages

    Directory of Open Access Journals (Sweden)

    Tantug

    2010-11-01

    In this paper, we investigate the document categorization task with statistical language models. Our study mainly focuses on categorization of documents in agglutinative languages. Due to the productive morphology of agglutinative languages, the number of word forms encountered in naturally occurring text is very large. From the language modeling perspective, a large vocabulary results in serious data sparseness problems. In order to cope with this drawback, previous studies in various application areas suggest modified language models based on different morphological units. It is reported that performance improvements can be achieved with these modified language models. In our document categorization experiments, we use standard word form based language models as well as other modified language models based on root words, root words and part-of-speech information, truncated word forms and character sequences. Additionally, to find an optimum parameter set, multiple tests are carried out with different language model orders and smoothing methods. Similar to previous studies on other tasks, our experimental results on categorization of Turkish documents reveal that applying linguistic preprocessing steps for language modeling provides improvements over standard language models to some extent. However, it is also observed that similar level of performance improvements can also be acquired by simpler character level or truncated word form models which are language independent.
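The truncated word form idea can be sketched as a language-independent normalization in front of an ordinary unigram categorizer; the tiny training set and add-one smoothing below are illustrative, not the paper's setup.

```python
import math
from collections import Counter

# Truncated word forms: cut each token to its first k characters so
# that many inflected forms of an agglutinative language collapse to
# one vocabulary entry, shrinking the model and easing sparseness.
def truncate(text, k=5):
    return [w[:k] for w in text.lower().split()]

def train(docs_by_cat, k=5):
    # One add-one-smoothed unigram model per category.
    return {c: Counter(t for d in docs for t in truncate(d, k))
            for c, docs in docs_by_cat.items()}

def classify(text, models, k=5):
    vocab = {t for m in models.values() for t in m}
    def score(m):
        total = sum(m.values()) + len(vocab)
        return sum(math.log((m[t] + 1) / total)
                   for t in truncate(text, k))
    return max(models, key=lambda c: score(models[c]))

models = train({
    "sports": ["footballers scored goals",
               "goalkeeper saved penalties"],
    "finance": ["markets rallied strongly",
                "investors bought bonds"],
})
# "goalscorers" truncates to "goals", matching the sports model.
label = classify("goalscorers celebrated", models)
```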

  4. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

    Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed in translation and application to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
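The annotate-build-traverse pipeline described above can be sketched as a depth-first traversal over a cause-and-effect graph that concatenates the annotating DNA parts; the names here are illustrative, not the iBioSim or SBOL API.

```python
# Sketch of the paper's pipeline: model elements annotated with DNA
# parts become graph vertices, edges follow cause-and-effect
# relationships, and a traversal assembles the parts into a
# composite component.
def assemble(parts, edges, start):
    """DFS from `start`, collecting each element's annotating DNA
    part in visit order and concatenating them."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        order.append(parts[node])
        for nxt in edges.get(node, []):
            visit(nxt)
    visit(start)
    return "".join(order)

# Hypothetical annotations: element name -> DNA part sequence label.
parts = {"promoter": "Pr", "gene": "CDS", "terminator": "Ter"}
edges = {"promoter": ["gene"], "gene": ["terminator"]}
composite = assemble(parts, edges, "promoter")
```

The resulting composite could itself annotate the model, giving the hierarchical library of annotated models the paper describes.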

  5. Securing XML Documents

    Directory of Open Access Journals (Sweden)

    Charles Shoniregun

    2004-11-01

    XML (eXtensible Markup Language) is becoming the current standard for establishing interoperability on the Web. XML data are self-descriptive and syntax-extensible; this makes it very suitable for representation and exchange of semi-structured data, and allows users to define new elements for their specific applications. As a result, the number of documents incorporating this standard is continuously increasing over the Web. The processing of XML documents may require a traversal of all document structure and therefore, the cost could be very high. A strong demand for a means of efficient and effective XML processing has posed a new challenge for the database world. This paper discusses a fast and efficient indexing technique for XML documents, and introduces the XML graph numbering scheme. It can be used for indexing and securing the graph structure of XML documents. This technique provides an efficient method to speed up XML data processing. Furthermore, the paper explores the classification of existing methods, their impact on query processing, and indexing.
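One classic graph numbering for XML assigns each node a (pre, post) pair so that ancestor tests reduce to two integer comparisons instead of a document traversal; this is a sketch of the general technique, not necessarily the paper's exact scheme.

```python
import xml.etree.ElementTree as ET
from itertools import count

# Pre/post-order numbering: node a is an ancestor of node b exactly
# when pre(a) < pre(b) and post(a) > post(b), so structural queries
# avoid re-traversing the document.
def number(root):
    pre_ctr, post_ctr, idx = count(), count(), {}
    def walk(node):
        pre = next(pre_ctr)
        for child in node:
            walk(child)
        idx[node] = (pre, next(post_ctr))
    walk(root)
    return idx

def is_ancestor(idx, a, b):
    return idx[a][0] < idx[b][0] and idx[a][1] > idx[b][1]

doc = ET.fromstring("<a><b><c/></b><d/></a>")
idx = number(doc)
b, c, d = doc.find("b"), doc.find(".//c"), doc.find("d")
```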

  6. A two-way interface between limited Systems Biology Markup Language and R

    Directory of Open Access Journals (Sweden)

    Radivoyevitch Tomas

    2004-12-01

    Background: Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results: A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML(), which maps this R model structure to SBML level 2, and read.SBML(), which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions: List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.

  7. A two-way interface between limited Systems Biology Markup Language and R.

    Science.gov (United States)

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
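The round trip provided by the write.SBML()/read.SBML() pair can be imitated in Python, with a dict standing in for the R list structure and a much-reduced SBML-like subset:

```python
import xml.etree.ElementTree as ET

# Sketch of an SBML round trip over a tiny subset of SBML level 2:
# a plain dict plays the role of the paper's R list structure.
def write_sbml(model):
    sbml = ET.Element("sbml", {"level": "2"})
    mdl = ET.SubElement(sbml, "model", {"id": model["id"]})
    los = ET.SubElement(mdl, "listOfSpecies")
    for sid, conc in model["species"].items():
        ET.SubElement(los, "species",
                      {"id": sid, "initialConcentration": str(conc)})
    return ET.tostring(sbml, encoding="unicode")

def read_sbml(text):
    mdl = ET.fromstring(text).find("model")
    return {"id": mdl.get("id"),
            "species": {s.get("id"): float(s.get("initialConcentration"))
                        for s in mdl.find("listOfSpecies")}}

model = {"id": "purine", "species": {"ATP": 2.5, "ADP": 0.4}}
roundtrip = read_sbml(write_sbml(model))
```

A model that survives the round trip unchanged, as here, is the same check the paper applies to its purine metabolism example.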

  8. A New Method of Viewing Attachment Document of eMail on Various Mobile Devices

    Science.gov (United States)

    Ko, Heeae; Seo, Changwoo; Lim, Yonghwan

    As the computing power of mobile devices improves rapidly, many kinds of web services, such as Email, are becoming available on mobile devices. Mobile Mail Service began early, but the service has mostly been limited to certain specified mobile devices such as Smart Phones; users have to purchase a specified phone to benefit from Mobile Mail Service. In this paper, we solve this problem using the DIDL (Digital Item Declaration Language) markup type defined in MPEG-21 together with a MobileGate Server. DIDL can be converted to other markup types that are displayed by mobile devices. By transforming PC Web Mail contents, including attachment documents, to DIDL markup through the MobileGate Server, Mobile Mail Service becomes available for all kinds of mobile devices.

  9. XML schemas and mark-up practices of taxonomic literature.

    Science.gov (United States)

    Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat

    2011-01-01

    We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.

  10. Treatment of Markup in Statistical Machine Translation

    OpenAIRE

    Müller, Mathias

    2017-01-01

    We present work on handling XML markup in Statistical Machine Translation (SMT). The methods we propose can be used to effectively preserve markup (for instance inline formatting or structure) and to place markup correctly in a machine-translated segment. We evaluate our approaches with parallel data that naturally contains markup or where markup was inserted to create synthetic examples. In our experiments, hybrid reinsertion has proven the most accurate method to handle markup, while alignm...

  11. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We see these results as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.

  12. Changes in latent fingerprint examiners' markup between analysis and comparison.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%). Published by Elsevier Ireland Ltd.

  13. 47 CFR 1.355 - Documents in foreign language.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Documents in foreign language. 1.355 Section 1... Proceedings Evidence § 1.355 Documents in foreign language. Every document, exhibit, or other paper written in a language other than English, which shall be filed in any proceeding, or in response to any order...

  14. Gstruct: a system for extracting schemas from GML documents

    Science.gov (United States)

    Chen, Hui; Zhu, Fubao; Guan, Jihong; Zhou, Shuigeng

    2008-10-01

    Geography Markup Language (GML) becomes the de facto standard for geographic information representation on the internet. GML schema provides a way to define the structure, content, and semantic of GML documents. It contains useful structural information of GML documents and plays an important role in storing, querying and analyzing GML data. However, GML schema is not mandatory, and it is common that a GML document contains no schema. In this paper, we present Gstruct, a tool for GML schema extraction. Gstruct finds the features in the input GML documents, identifies geometry datatypes as well as simple datatypes, then integrates all these features and eliminates improper components to output the optimal schema. Experiments demonstrate that Gstruct is effective in extracting semantically meaningful schemas from GML documents.
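    The core of schema extraction can be illustrated with a toy version of the idea: walk a set of sample documents and record, for every element, which child elements occur beneath it. The sketch below uses Python's standard ElementTree and invented GML-like element names, and only recovers parent-child structure, not the datatypes or cardinalities that Gstruct identifies.

    ```python
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def extract_structure(docs):
        """Infer, per element tag, the set of child tags seen across samples."""
        children = defaultdict(set)

        def walk(elem):
            for child in elem:
                children[elem.tag].add(child.tag)
                walk(child)

        for doc in docs:
            walk(ET.fromstring(doc))
        return {tag: sorted(kids) for tag, kids in children.items()}

    # two schemaless sample documents with slightly different content
    samples = [
        "<FeatureCollection><Road><name>A1</name><centerLine/></Road></FeatureCollection>",
        "<FeatureCollection><Road><name>B2</name></Road></FeatureCollection>",
    ]
    print(extract_structure(samples))
    # {'FeatureCollection': ['Road'], 'Road': ['centerLine', 'name']}
    ```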

  15. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  16. Endangered Language Documentation and Transmission

    Directory of Open Access Journals (Sweden)

    D. Victoria Rau

    2007-01-01

    Full Text Available This paper describes an on-going project on digitally archiving Yami language documentation (http://www.hrelp.org/grants/projects/index.php?projid=60). We present a cross-disciplinary approach, involving computer science and applied linguistics, to document the Yami language and prepare teaching materials. Our discussion begins with an introduction to an integrated framework for archiving, processing and developing learning materials for Yami (Yang and Rau 2005), followed by a historical account of Yami language teaching, from a grammatical syllabus (Dong and Rau 2000b) to a communicative syllabus using a multimedia CD as a resource (Rau et al. 2005), to the development of interactive on-line learning based on the digital archiving project. We discuss the methods used and challenges of each stage of preparing Yami teaching materials, and present a proposal for rethinking pedagogical models for e-learning.

  17. Systematic reconstruction of TRANSPATH data into cell system markup language.

    Science.gov (United States)

    Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru

    2008-06-23

    Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulations of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is incorporated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to convert TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions.

  18. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    Science.gov (United States)

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  19. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
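    The memory argument can be made concrete with a toy sketch, where plain Python dicts stand in for SBML models and sizes are measured with sys.getsizeof; the species names and population size are made up. Keeping one shared model next to per-cell state arrays leaves far fewer container objects alive than expanding the model N times.

    ```python
    import copy
    import sys

    # Toy stand-in for one cell model: species names -> initial amounts
    cell_model = {"lacI": 10.0, "tetR": 5.0, "cI": 0.0}
    N = 1000  # cells in the population

    # Expanded form: N full copies of the model structure (naive expansion)
    expanded = [copy.deepcopy(cell_model) for _ in range(N)]

    # Arrayed form: the model is stored once; only per-cell state is replicated
    arrayed = {"model": cell_model,
               "state": {name: [amount] * N for name, amount in cell_model.items()}}

    size_expanded = sum(sys.getsizeof(d) for d in expanded)
    size_arrayed = sum(sys.getsizeof(v) for v in arrayed["state"].values())
    print(size_expanded > size_arrayed)  # True: far fewer containers survive
    ```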

  20. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core

    Directory of Open Access Journals (Sweden)

    Gauges Ralph

    2015-06-01

    Full Text Available Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded.

  1. Hospital markup and operation outcomes in the United States.

    Science.gov (United States)

    Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M

    2016-07-01

    Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5-12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme markup hospitals were more often large (46.3% vs 33.8%, P markup ratio compared with 19.3% (n = 452) and 6.8% (n = 35) of nonprofit and government hospitals, respectively. Perioperative morbidity (32.7% vs 26.4%, P markup hospitals. There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospital charges being 4 times greater than the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
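    The study's key quantity is simple to compute: the markup ratio is total charges divided by Medicare-allowable costs, and "extreme markup" hospitals are those in the top quartile of that ratio. A minimal sketch with invented charge/cost figures (the resulting ratios 2.8, 3.9, 4.9 and 7.25 echo the thresholds quoted in the abstract; the nearest-rank percentile method is an illustrative choice):

    ```python
    def markup_ratio(charges, costs):
        """Markup ratio: total charges divided by (Medicare-allowable) costs."""
        return charges / costs

    def top_quartile_cutoff(ratios):
        """Simple nearest-rank 75th percentile, an illustrative method choice."""
        s = sorted(ratios)
        return s[int(0.75 * (len(s) - 1))]

    # invented charge/cost pairs chosen to echo the thresholds quoted above
    ratios = [markup_ratio(c, k) for c, k in
              [(28000, 10000), (39000, 10000), (49000, 10000), (72500, 10000)]]
    print(ratios)                      # [2.8, 3.9, 4.9, 7.25]
    print(top_quartile_cutoff(ratios))
    ```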

  2. Endogenous Markups, Firm Productivity and International Trade:

    DEFF Research Database (Denmark)

    Bellone, Flora; Musso, Patrick; Nesta, Lionel

    In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous mark-ups. At the firm level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2) markups are positively related to firm productivity; 3) markups are negatively related to import penetration; 4) markups are positively related to firm export intensity, and markups are higher on the export market than on the domestic one in the presence of trade barriers and/or if competitors on the export market are less efficient than competitors on the domestic market. We estimate micro-level price cost margins (PCMs) using firm-level data, extending the techniques developed by Hall (1986, 1988) and extended by Domowitz et al. (1988) and Roeger (1995), for the French manufacturing industry from …

  3. Wine Price Markup in California Restaurants

    OpenAIRE

    Amspacher, William

    2011-01-01

    The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.
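    The regression in question is an ordinary least squares fit of mark-up against retail price. A self-contained sketch with fabricated numbers (not the study's data) shows the closed-form slope and intercept and the inverse relationship the study reports:

    ```python
    def ols(x, y):
        """Ordinary least squares for y = a + b*x, in closed form."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b   # intercept, slope

    # fabricated data: percentage mark-up falls as retail price rises
    retail_price = [10, 20, 30, 40, 50]
    markup_pct = [150, 130, 115, 100, 90]
    a, b = ols(retail_price, markup_pct)
    print(a, b)  # 162.0 -1.5
    ```

    The negative slope is the expected inverse relationship between retail price and mark-up; on real data the coefficients would of course differ.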

  4. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    We explore the relationship between markups and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups and iii) firms' markups increase (decrease) upon export entry (exit). We see these findings as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.

  5. Decision-cache based XACML authorisation and anonymisation for XML documents

    OpenAIRE

    Ulltveit-Moe, Nils; Oleshchuk, Vladimir A

    2012-01-01

    Author's version of an article in the journal: Computer Standards and Interfaces. Also available from the publisher at: http://dx.doi.org/10.1016/j.csi.2011.10.007 This paper describes a decision cache for the eXtensible Access Control Markup Language (XACML) that supports fine-grained authorisation and anonymisation of XML based messages and documents down to XML attribute and element level. The decision cache is implemented as an XACML obligation service, where a specification of the XML...
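    The caching idea itself is compact: memoise the policy decision point's (decision, obligations) answer per request tuple so that repeated evaluations of the same request skip policy evaluation. The sketch below is a toy under stated assumptions: the class name, the stub PDP and the obligation string are all invented, and real XACML requests carry far richer attributes.

    ```python
    class DecisionCache:
        """Toy XACML-style decision cache keyed by (subject, resource, action).
        A real PDP evaluates a policy set; here a stub supplies the decisions."""

        def __init__(self, pdp):
            self.pdp = pdp        # callable: request tuple -> (decision, obligations)
            self.cache = {}
            self.hits = 0

        def decide(self, subject, resource, action):
            key = (subject, resource, action)
            if key in self.cache:
                self.hits += 1    # cached decision: no policy evaluation needed
            else:
                self.cache[key] = self.pdp(key)
            return self.cache[key]

    def stub_pdp(request):
        subject, resource, action = request
        if subject == "analyst" and action == "read":
            # obligation: anonymise a sensitive element before release (invented)
            return ("Permit", ["anonymise:/Patient/Name"])
        return ("Deny", [])

    cache = DecisionCache(stub_pdp)
    first = cache.decide("analyst", "report.xml", "read")   # evaluated by the PDP
    second = cache.decide("analyst", "report.xml", "read")  # served from the cache
    print(first == second, cache.hits)  # True 1
    ```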

  6. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    Science.gov (United States)

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To improve future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.

  7. XML and E-Journals: The State of Play.

    Science.gov (United States)

    Wusteman, Judith

    2003-01-01

    Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…

  8. GlottoVis : Visualizing Language Endangerment and Documentation

    NARCIS (Netherlands)

    Castermans, T.H.A.; Speckmann, B.; Verbeek, K.A.B.; Westenberg, M.A.; Hammarström, H.

    2017-01-01

    We present GlottoVis, a system designed to visualize language endangerment as collected by UNESCO and descriptive status as collected by the Glottolog project. Glottolog records bibliographic data for the world’s (lesser known) languages. Languages are documented with increasing detail, but the

  9. TEI Standoff Markup - A work in progress

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena; Broughton, Misha

    2015-01-01

    Markup is said to be standoff, or external, "when the markup data is placed outside of the text it is meant to tag". One of the most widely recognized limitations of inline XML markup is its inability to cope with element overlap; standoff markup has been considered as a possible solution to this problem.

  10. Language Ideology or Language Practice? An Analysis of Language Policy Documents at Swedish Universities

    Science.gov (United States)

    Björkman, Beyza

    2014-01-01

    This article presents an analysis and interpretation of language policy documents from eight Swedish universities with regard to intertextuality, authorship and content analysis of the notions of language practices and English as a lingua franca (ELF). The analysis is then linked to Spolsky's framework of language policy, namely language…

  11. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above procedures integrate disparate data formats, protocols and tools. ART-ML, expanding ARTool, proposes a way of representing the individual resources so that they remain interoperable, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community through efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software.

  12. XML/TEI Stand-off Markup. One step beyond.

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena

    2018-01-01

    Stand-off markup is widely considered as a possible solution for overcoming the limitation of inline XML markup, primarily dealing with multiple overlapping hierarchies. Considering previous contributions on the subject and implementations of stand-off markup, we propose a new TEI-based model for
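    The basic mechanics of stand-off markup are easy to demonstrate: annotations live outside the text as (start, end, tag) character ranges, so two ranges may overlap freely, which inline XML's strict nesting forbids. A minimal illustration with an invented text and tags:

    ```python
    text = "standoff markup lives outside the text"

    # Stand-off annotations: (start, end, tag) character ranges stored apart
    # from the text. The two ranges below overlap, which inline XML cannot
    # express without workarounds such as milestone elements.
    annotations = [(0, 15, "term"), (9, 29, "phrase")]

    def spans(text, annotations):
        """Resolve each stand-off range back to the substring it annotates."""
        return [(tag, text[s:e]) for s, e, tag in annotations]

    print(spans(text, annotations))
    # [('term', 'standoff markup'), ('phrase', 'markup lives outside')]
    ```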

  13. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine-grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time-consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  14. Principles of reusability of XML-based enterprise documents

    Directory of Open Access Journals (Sweden)

    Roman Malo

    2010-01-01

    Full Text Available XML (Extensible Markup Language) represents one of the flexible platforms for processing enterprise documents. Its simple syntax and the powerful software infrastructure for processing this type of document guarantee high interoperability of individual documents. XML is today one of the technologies influencing all aspects of the ICT area. The paper describes questions and basic principles of reusing XML-based documents in the field of enterprise documents. If we use XML databases or XML data types for storing these types of documents, partial redundancy can be expected due to possible document similarity. This similarity can be found especially in document structure and also in document content, and its elimination is a necessary part of data optimization. The main idea of the paper is focused on possibilities of dividing complex XML documents into independent fragments that can be used as standalone documents, and on how to process them. The conclusions can be applied within software tools working with XML-based structured data and documents, such as document management systems or content management systems.
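    The fragmentation idea can be sketched with Python's standard ElementTree: split a document into its top-level children, each of which parses as a standalone document, then reassemble them later. The element names below are invented for illustration:

    ```python
    import xml.etree.ElementTree as ET

    doc = """<invoice>
      <header><number>42</number></header>
      <items><item>pen</item><item>ink</item></items>
    </invoice>"""

    root = ET.fromstring(doc)

    # Split the document into standalone fragments, one per top-level child
    fragments = [ET.tostring(child, encoding="unicode") for child in root]

    # Each fragment parses on its own, so it can be stored and reused
    # independently; here we reassemble the original document from them.
    reassembled = ET.Element("invoice")
    for frag in fragments:
        reassembled.append(ET.fromstring(frag))

    print(len(fragments), reassembled.find("items/item").text)  # 2 pen
    ```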

  15. Language Documentation and Sociolinguistics | Iwuchukwu | Lwati ...

    African Journals Online (AJOL)

    The position of this paper is that one of the criteria of a genuine language documentation project is that it must represent the language as it is used e.g. the breaking of kola or pouring of libation by the Igbo, the naming ceremony, new yam festival, burial and marriage ceremonies in Bekwarra, Lokaa, Ibibio, Efik, Yoruba, ...

  16. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.

  17. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

    A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retaile...
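    The double-marginalization setting referred to above is easy to state numerically: each stage applies a percentage mark-up to its input price, so the two mark-ups compound rather than add. All figures below are invented for illustration:

    ```python
    unit_cost = 10.0          # producer's marginal cost
    producer_markup = 0.50    # producer adds 50% on cost
    retailer_markup = 0.30    # retailer adds 30% on the wholesale price

    wholesale = unit_cost * (1 + producer_markup)   # 15.0
    retail = wholesale * (1 + retailer_markup)      # 19.5

    # the combined mark-up exceeds the sum of the two stage mark-ups (0.80)
    combined = retail / unit_cost - 1
    print(wholesale, retail, round(combined, 2))  # 15.0 19.5 0.95
    ```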

  18. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    2013-05-24

    Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML-compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML-compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.

  19. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    Science.gov (United States)

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system: it depends on the organism, tissue and cell type as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple types of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  20. 17 CFR 232.306 - Foreign language documents and symbols.

    Science.gov (United States)

    2010-04-01

    ... symbols. 232.306 Section 232.306 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION... § 232.306 Foreign language documents and symbols. (a) All electronic filings and submissions must be in... words or letters in the English language rather than representative symbols, except that HTML documents...

  1. The DSD Schema Language

    DEFF Research Database (Denmark)

    Klarlund, Nils; Møller, Anders; Schwartzbach, Michael Ignatieff

    2002-01-01

    XML (Extensible Markup Language), a linear syntax for trees, has gathered a remarkable amount of interest in industry. The acceptance of XML opens new venues for the application of formal methods such as specification of abstract syntax tree sets and tree transformations. A user domain may be specified as a set of trees. For example, XHTML is a user domain corresponding to a set of XML documents that make sense as hypertext. A notation for defining such a set of XML trees is called a schema language. We believe that a useful schema notation must identify most of the syntactic requirements … on tree nodes depend on their context. We also support a general, declarative mechanism for inserting default elements and attributes. Also, we include a simple technique for reusing and evolving DSDs through selective redefinitions. The expressiveness of DSD is comparable to that of the schema language …

  2. A Mathematical Model Approach to Determining the Product Selling-Price Markup Percentage

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available The purpose of this research is to design a mathematical model that can determine selling volume as an alternative way to set the markup percentage. The model was designed with multiple-regression statistics: selling volume is a function of the markup, market condition, and substitute condition variables. The designed model passed tests of the error assumptions, model accuracy, model validation, and multicollinearity. The model has been applied in an application program with the expectation that the program can give: (1) an alternative for the user in deciding the markup percentage, (2) an illustration of the gross profit estimated to be achieved for each selected markup percentage, (3) an illustration of the estimated percentage of units sold for each selected markup percentage, and (4) an illustration of the total net income before tax obtainable for a specific period. Keywords: mathematical model, application program, selling volume, markup, gross profit.
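    Once such a demand model is fitted, choosing the markup is a simple search: predict units sold from the markup, multiply by the per-unit margin, and keep the markup with the highest estimated gross profit. The coefficients and unit cost below are invented for illustration, not the paper's estimates:

    ```python
    # Assumed fitted model (illustrative coefficients):
    # units_sold = a + b * markup_pct, with demand falling as the markup rises
    a, b = 1010.0, -8.0
    unit_cost = 50.0

    def gross_profit(markup_pct):
        units = max(0.0, a + b * markup_pct)
        margin_per_unit = unit_cost * markup_pct / 100.0
        return units * margin_per_unit

    # scan candidate markup percentages and keep the most profitable one
    best = max(range(0, 126), key=gross_profit)
    print(best, round(gross_profit(best), 2))  # 63 15939.0
    ```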

  3. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup consists of the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotations to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with either of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
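As a minimal sketch of the idea of serializing an annotation together with its graphical markup, the snippet below builds a small XML artifact. The element and attribute names are invented for illustration; they are not the AIM schema.

```python
import xml.etree.ElementTree as ET

def build_annotation(image_uid, label, points):
    """Serialize one image annotation plus its graphical markup.
    Element/attribute names here are illustrative, not official AIM."""
    ann = ET.Element("ImageAnnotation", {"imageUID": image_uid})
    ET.SubElement(ann, "label").text = label
    markup = ET.SubElement(ann, "markup", {"shape": "polyline"})
    for x, y in points:
        ET.SubElement(markup, "point", {"x": str(x), "y": str(y)})
    return ET.tostring(ann, encoding="unicode")

xml_text = build_annotation("1.2.840.1", "suspicious nodule", [(10, 12), (14, 18)])
```

The point of separating `label` (the annotation) from `markup` (the overlay geometry) mirrors the distinction the abstract draws between the two concepts.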

  4. First Steps to Endangered Language Documentation: The Kalasha Language, a Case Study

    Science.gov (United States)

    Mela-Athanasopoulou, Elizabeth

    2011-01-01

The present paper, based on extensive fieldwork conducted on Kalasha, an endangered language spoken in three small valleys in the Chitral District of Northwestern Pakistan, exposes a spontaneous dialogue-based elicitation of linguistic material used for the description and documentation of the language. After a brief display of the basic typology…

  5. Variation in markup of general surgical procedures by hospital market concentration.

    Science.gov (United States)

    Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M

    2018-04-01

    Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup - the relative increase in price over costs - has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95%CI: +69.5% - +96.9%; p markup compared to public/private not-for-profit status in unconcentrated markets (95%CI: +45.4% - +81.1%; p markup. Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. OFFICIAL DOCUMENTS RELATING TO PORTUGUESE LANGUAGE TEACHING, INTERCULTURALITY AND LITERACY POLICY

    Directory of Open Access Journals (Sweden)

    Cloris Porto Torquato

    2016-06-01

Full Text Available The present article analyzes two documents, Parâmetros Curriculares Nacionais – Língua Portuguesa (BRASIL, 1998) and Parâmetros Curriculares Nacionais – Temas Transversais – Pluralidade Cultural (BRASIL, 1998b), conceiving them as constituents of language policies (RICENTO, 2006; SHOHAMY, 2006) and literacy policies, and it focuses on the intercultural dialogues/conflicts that these documents promote when they direct that the teaching of the language should take the text as its main object and indicate which genres should be privileged. The text thus deals with language policies, focusing more specifically on literacy policies (drawing on the concept of literacy formulated by the New Literacy Studies (STREET, 1984, 1993, 2003; BARTON; HAMILTON, 1998; SIGNORINI, 2001)) and on interculturality (JANZEN, 2005). The analysis of the documents is undertaken in light of the Bakhtinian conception of language and mobilizes the following concepts of the Bakhtin Circle: dialogism, utterance, and speech genres. Methodologically, the text is based on the orientations of the authors of this Circle for the study of language (BAKHTIN/VOLOSHINOV, 1986; BAKHTIN, 2003). The analysis indicates that the official documents, in promoting literacy policies, also promote intercultural conflicts, because they privilege the dominant literacies and silence other literacy practices. We understand that this silencing and invalidation of local literacy practices has implications for the constitution of the students' identities and for local language policies.

  7. From Documenting to Revitalizing an Endangered Language: Where Do Applied Linguists Fit?

    Science.gov (United States)

    Penfield, Susan D.; Tucker, Benjamin V.

    2011-01-01

    This paper explores the distance between documenting and revitalizing endangered languages and indicates critical points at which applied linguistics can play a role. We look at language documentation, language revitalization and their relationship. We then provide some examples from our own work. We see the lack of applied linguistics as a…

  8. GIBS Keyhole Markup Language (KML)

    Data.gov (United States)

National Aeronautics and Space Administration — The KML documentation standard provides a solution for imagery integration into mapping tools that support the KML standard, specifically Google Earth. Using...
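A minimal KML `GroundOverlay`, the element KML uses to drape an image over a latitude/longitude box, can be generated with the standard library. The overlay URL below is a placeholder, not a real GIBS endpoint.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def ground_overlay(name, href, north, south, east, west):
    """Build a minimal KML document containing one GroundOverlay."""
    ET.register_namespace("", KML_NS)            # serialize with default ns
    q = lambda tag: f"{{{KML_NS}}}{tag}"         # qualify a tag name
    kml = ET.Element(q("kml"))
    ov = ET.SubElement(ET.SubElement(kml, q("Document")), q("GroundOverlay"))
    ET.SubElement(ov, q("name")).text = name
    ET.SubElement(ET.SubElement(ov, q("Icon")), q("href")).text = href
    box = ET.SubElement(ov, q("LatLonBox"))
    for tag, val in (("north", north), ("south", south),
                     ("east", east), ("west", west)):
        ET.SubElement(box, q(tag)).text = str(val)
    return ET.tostring(kml, encoding="unicode")

# Placeholder URL; a real deployment would point at an imagery tile.
kml_text = ground_overlay("demo", "https://example.invalid/tile.png",
                          40, 30, -100, -110)
```

The resulting string can be saved as a `.kml` file and opened directly in any KML-aware viewer.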

  9. Applying language technology to nursing documents: pros and cons with a focus on ethics.

    Science.gov (United States)

    Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salakoski, Tapio; Salanterä, Sanna

    2007-10-01

The present study discusses ethics in building and using applications based on natural language processing in electronic nursing documentation. Specifically, we first focus on the question of how patient confidentiality can be ensured in developing language technology for the nursing documentation domain. Then, we identify and theoretically analyze the ethical outcomes that arise when using natural language processing to support clinical judgement and decision-making. In total, we put forward and justify 10 claims related to ethics in applying language technology to nursing documents. A review of recent scientific articles related to ethics in electronic patient records or in the utilization of large databases was conducted. The results were then compared with ethical guidelines for nurses and the Finnish legislation covering health care and the processing of personal data. Finally, the practical experiences of the authors in applying the methods of natural language processing to nursing documents were appended. Patient records supplemented with natural language processing capabilities may help nurses give better, more efficient and more individualized care to their patients. In addition, language technology may facilitate patients' ability to receive truthful information about their health and improve the nature of narratives. Because of these benefits, research on the use of language technology in narratives should be encouraged. In contrast, privacy-sensitive health care documentation brings specific ethical concerns and difficulties to the natural language processing of nursing documents. Therefore, when developing natural language processing tools, patient confidentiality must be ensured. While using the tools, health care personnel should always remain responsible for clinical judgement and decision-making. One should also consider that the use of language technology in nursing narratives may threaten patients' rights by using documentation collected

  10. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    Science.gov (United States)

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CML elements used and gives practical examples for common types of spectra. In addition, it demonstrates how different views of the data can be expressed and what problems still exist.
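A rough sketch of encoding a peak list as XML in the spirit of CMLSpect: the element and attribute names below are simplified stand-ins, not the normative CML vocabulary.

```python
import xml.etree.ElementTree as ET

def spectrum_xml(peaks):
    """Encode (position, intensity) pairs as a peak list.
    Names follow the general shape of CMLSpect but are simplified."""
    spec = ET.Element("spectrum", {"type": "NMR"})
    plist = ET.SubElement(spec, "peakList")
    for pos, height in peaks:
        ET.SubElement(plist, "peak",
                      {"xValue": str(pos), "yValue": str(height)})
    return ET.tostring(spec, encoding="unicode")

xml_s = spectrum_xml([(7.2, 0.9), (1.3, 0.4)])
```

Keeping each peak as an element with x/y attributes is what makes the "different views" the abstract mentions (tables, plots, annotations) derivable from one serialization.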

  11. XML and its impact on content and structure in electronic health care documents.

    Science.gov (United States)

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself are about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338
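A toy example of the kind of artifact such a subcommittee standardizes: a document type definition inlined as an internal subset, with a conforming instance. The element names are illustrative, not an ASTM DTD, and Python's stdlib parser only checks well-formedness, not validity against the DTD.

```python
from xml.dom import minidom

# Toy document type for a discharge summary; names are invented.
DOC = """<?xml version="1.0"?>
<!DOCTYPE summary [
  <!ELEMENT summary (patient, diagnosis+)>
  <!ELEMENT patient (#PCDATA)>
  <!ELEMENT diagnosis (#PCDATA)>
]>
<summary>
  <patient>Jane Doe</patient>
  <diagnosis>J45.9</diagnosis>
</summary>"""

dom = minidom.parseString(DOC)              # well-formedness check only
diagnoses = dom.getElementsByTagName("diagnosis")
```

A validating parser (outside the stdlib) would additionally reject instances that violate the content model, e.g. a `<summary>` with no `<diagnosis>`.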

  12. The Long-Run Relationship Between Inflation and the Markup in the U.S.

    OpenAIRE

    Sandeep Mazumder

    2011-01-01

    This paper examines the long-run relationship between inflation and a new measure of the price-marginal cost markup. This new markup index is derived while accounting for labor adjustment costs, which a large number of the papers that estimate the markup have ignored. We then examine the long-run relationship between this markup measure, which is estimated using U.S. manufacturing data, and inflation. We find that decreases in the markup that are associated with a percentage point increase in...

  13. Beyond the Ancestral Code: Towards a Model for Sociolinguistic Language Documentation

    Science.gov (United States)

    Childs, Tucker; Good, Jeff; Mitchell, Alice

    2014-01-01

    Most language documentation efforts focus on capturing lexico-grammatical information on individual languages. Comparatively little effort has been devoted to considering a language's sociolinguistic contexts. In parts of the world characterized by high degrees of multilingualism, questions surrounding the factors involved in language choice and…

  14. Integrating Language Documentation, Language Preservation, and Linguistic Research: Working with the Kokamas from the Amazon

    Science.gov (United States)

    Vallejos, Rosa

    2014-01-01

    This paper highlights the role of speech community members on a series of interconnected projects to document, study and maintain Kokama, a deeply endangered language from the Peruvian Amazon. The remaining fluent speakers of the language are mostly older than 60 years of age, are spread out across various small villages, and speak the language in…

  15. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan.

    Science.gov (United States)

    Waning, Brenda; Maddix, Jason; Soucy, Lyne

    2010-07-13

Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. Health systems researchers must document the positive and negative…

  16. Markup heterogeneity, export status ans the establishment of the euro

    OpenAIRE

    Guillou , Sarah; Nesta , Lionel

    2015-01-01

We investigate the effects of the establishment of the euro on the markups of French manufacturing firms. Merging firm-level census data with customs data, we estimate time-varying firm-specific markups and distinguish eurozone exporters from other firms between 1995 and 2007. We find that the establishment of the euro has had a pronounced pro-competitive impact by reducing firm markups by 14 percentage points. By reducing export costs, the euro represented an opp...

  17. Markup cyclicality, employment adjustment, and financial constraints

    OpenAIRE

    Askildsen, Jan Erik; Nilsen, Øivind Anti

    2001-01-01

We investigate the existence of markups and their cyclical behaviour. The markup is not directly observed. Instead, it is given as a price-cost relation that is estimated from a dynamic model of the firm. The model incorporates potentially costly employment adjustments and takes into consideration that firms may be financially constrained. When considering the size of the future labour stock, financially constrained firms may behave as if they have a higher discount factor, which may affect the realise...

  18. A Typed Text Retrieval Query Language for XML Documents.

    Science.gov (United States)

    Colazzo, Dario; Sartiani, Carlo; Albano, Antonio; Manghi, Paolo; Ghelli, Giorgio; Lini, Luca; Paoli, Michele

    2002-01-01

    Discussion of XML focuses on a description of Tequyla-TX, a typed text retrieval query language for XML documents that can search on both content and structures. Highlights include motivations; numerous examples; word-based and char-based searches; tag-dependent full-text searches; text normalization; query algebra; data models and term language;…

  19. Free Trade Agreements and Firm-Product Markups in Chilean Manufacturing

    DEFF Research Database (Denmark)

    Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick

In this paper, we use detailed information about firms' product portfolios to study how trade liberalization affects prices, markups and productivity. We document these effects using firm-product level data in Chilean manufacturing following two major trade agreements with the EU and the US. … The dataset provides information about the value and quantity of each good produced by the firm, as well as the amount of exports. One additional and unique characteristic of our dataset is that it provides a firm-product level measure of the unit average cost. We use this information to compute a firm…

  20. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

With the introduction of the ICD-10 as the standard for diagnosis coding, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the eXtensible Markup Language (XML) that facilitates its integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
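The idea of an XML representation of a hierarchical classification can be sketched as follows. The fragment and the traversal are illustrative; they do not reproduce actual ICD-10 structure or content.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a hierarchical classification; real ICD-10
# chapters, blocks and categories are far richer than this sketch.
XML = """
<classification name="ICD-10" version="illustrative">
  <chapter code="X">
    <block code="J40-J47">
      <category code="J45" title="Asthma"/>
    </block>
  </chapter>
</classification>
"""

def find_path(root, code):
    """Return the chain of code attributes leading down to `code`."""
    def walk(node, trail):
        t = trail + [node.get("code")] if node.get("code") else trail
        if node.get("code") == code:
            return t
        for child in node:
            hit = walk(child, t)
            if hit:
                return hit
        return None
    return walk(root, [])

root = ET.fromstring(XML)
path = find_path(root, "J45")   # chapter -> block -> category
```

Coding software can use exactly this kind of traversal to resolve a category code to its position in the hierarchy, independent of language or version.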

  1. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    Maddix Jason

    2010-07-01

Full Text Available Abstract Background Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. Methods We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Results Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Conclusion Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals.

  2. Are the determinants of markup size industry-specific? The case of Slovenian manufacturing firms

    Directory of Open Access Journals (Sweden)

    Ponikvar Nina

    2011-01-01

    Full Text Available The aim of this paper is to identify factors that affect the pricing policy in Slovenian manufacturing firms in terms of the markup size and, most of all, to explicitly account for the possibility of differences in pricing procedures among manufacturing industries. Accordingly, the analysis of the dynamic panel is carried out on an industry-by-industry basis, allowing the coefficients on the markup determinants to vary across industries. We find that the oligopoly theory of markup determination for the most part holds for the manufacturing sector as a whole, although large variability in markup determinants exists across industries within the Slovenian manufacturing. Our main conclusion is that each industry should be investigated separately in detail in order to assess the precise role of markup factors in the markup-determination process.

  3. The Commercial Office Market and the Markup for Full Service Leases

    OpenAIRE

    Jonathan A. Wiley; Yu Liu; Dongshin Kim; Tom Springer

    2014-01-01

    Because landlords assume all of the operating expense risk, rents for gross leases exceed those for net leases. The markup, or spread, for gross leases varies between properties and across markets. Specifically, the markup is expected to increase with the cost of real estate services at the property, and to be influenced by market conditions. A matching procedure is applied to measure the services markup as the percentage difference between the actual rent on a gross lease relative to the act...

  4. Trade reforms, mark-ups and bargaining power of workers: the case ...

    African Journals Online (AJOL)

    Ethiopian Journal of Economics ... workers between 1996 and 2007, a model of mark-up with labor bargaining power was estimated using random effects and LDPDM. ... Keywords: Trade reform, mark-up, bargaining power, rent, trade unions ...

  5. Development of an event-driven parser for active document and web-based nuclear design system

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yong Soo

    2005-02-15

Nuclear design work consists of extensive unit job modules in which many computer codes are used. Each unit module requires time-consuming and error-prone input preparation, code runs, output analysis, and a quality-assurance process. The reload-core safety-evaluation task is especially man-power intensive and time-consuming because of the large amount of calculations and data exchanges. The purpose of this study is to develop a new nuclear design system, the Innovative Design Processor (IDP), that minimizes human effort, maximizes design quality and productivity, and thereby achieves an ultimately optimized core loading pattern. The two basic principles of IDP are document-oriented design and web-based design. Contrary to conventional code-oriented or procedure-oriented design, document-oriented design is human-oriented: the final document is prepared automatically, complete with analysis, tables, and plots, once the designer writes a design document called an active document and feeds it to a parser. This study defined a number of active components and developed an event-driven parser for active documents in HTML (Hypertext Markup Language) or XML (eXtensible Markup Language). The active documents can be created on the web, which is another framework of IDP. Using a proper mix of server-side and client-side programming under the HAMP (HP-UX/Apache/MySQL/PHP) environment, the document-oriented design process on the web is modeled as a design wizard for the designer's convenience and for platform independence. This automation using IDP was tested for the reload safety evaluation of Korea Standard Nuclear Power Plant (KSNP)-type PWRs. Great time saving was confirmed: IDP can complete several-month jobs in a few days. A more optimized core loading pattern can therefore be obtained, since it takes little time to perform the reload safety evaluation with several core-loading-pattern candidates. Since the technology is also applicable to…
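The event-driven parsing approach can be sketched with Python's xml.sax: handler callbacks fire as tags open and close, and the handler collects the "active" elements it recognizes. The `<calc>` tag name below is hypothetical, not taken from IDP.

```python
import xml.sax
from io import StringIO

class ActiveDocHandler(xml.sax.ContentHandler):
    """Event-driven handler that collects 'active' <calc> elements
    from a design document (tag name is invented for illustration)."""
    def __init__(self):
        super().__init__()
        self.in_calc = False
        self.calcs = []
        self.buf = []

    def startElement(self, name, attrs):   # fired at each opening tag
        if name == "calc":
            self.in_calc = True
            self.buf = []

    def characters(self, content):         # fired for text content
        if self.in_calc:
            self.buf.append(content)

    def endElement(self, name):            # fired at each closing tag
        if name == "calc":
            self.calcs.append("".join(self.buf).strip())
            self.in_calc = False

doc = "<design><p>intro</p><calc>k_eff_check</calc><calc>power_map</calc></design>"
handler = ActiveDocHandler()
xml.sax.parse(StringIO(doc), handler)
```

An event-driven (streaming) parser never holds the whole document tree in memory, which suits long design documents with many embedded calculation directives.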

  6. Development of an event-driven parser for active document and web-based nuclear design system

    International Nuclear Information System (INIS)

    Park, Yong Soo

    2005-02-01

Nuclear design work consists of extensive unit job modules in which many computer codes are used. Each unit module requires time-consuming and error-prone input preparation, code runs, output analysis, and a quality-assurance process. The reload-core safety-evaluation task is especially man-power intensive and time-consuming because of the large amount of calculations and data exchanges. The purpose of this study is to develop a new nuclear design system, the Innovative Design Processor (IDP), that minimizes human effort, maximizes design quality and productivity, and thereby achieves an ultimately optimized core loading pattern. The two basic principles of IDP are document-oriented design and web-based design. Contrary to conventional code-oriented or procedure-oriented design, document-oriented design is human-oriented: the final document is prepared automatically, complete with analysis, tables, and plots, once the designer writes a design document called an active document and feeds it to a parser. This study defined a number of active components and developed an event-driven parser for active documents in HTML (Hypertext Markup Language) or XML (eXtensible Markup Language). The active documents can be created on the web, which is another framework of IDP. Using a proper mix of server-side and client-side programming under the HAMP (HP-UX/Apache/MySQL/PHP) environment, the document-oriented design process on the web is modeled as a design wizard for the designer's convenience and for platform independence. This automation using IDP was tested for the reload safety evaluation of Korea Standard Nuclear Power Plant (KSNP)-type PWRs. Great time saving was confirmed: IDP can complete several-month jobs in a few days. A more optimized core loading pattern can therefore be obtained, since it takes little time to perform the reload safety evaluation with several core-loading-pattern candidates. Since the technology is also applicable to the…

  7. SELECTION OF ONTOLOGY FOR WEB SERVICE DESCRIPTION LANGUAGE TO ONTOLOGY WEB LANGUAGE CONVERSION

    OpenAIRE

    J. Mannar Mannan; M. Sundarambal; S. Raghul

    2014-01-01

The Semantic Web extends the current human-readable web by encoding some of the semantics of resources in a machine-processable form. As a Semantic Web component, Semantic Web Services (SWS) use a mark-up that makes the data detailed and sophisticated enough for machine processing. One such language is the Ontology Web Language (OWL). Existing conventional web service annotations can be changed to semantic web services by mapping the Web Service Description Language (WSDL) with the semantic annotation of O...

  8. Planned growth as a determinant of the markup: the case of Slovenian manufacturing

    Directory of Open Access Journals (Sweden)

    Maks Tajnikar

    2009-11-01

    Full Text Available The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.

  9. The Price-Marginal Cost Markup and its Determinants in U.S. Manufacturing

    OpenAIRE

    Mazumder, Sandeep

    2009-01-01

    This paper estimates the price-marginal cost markup for US manufacturing using a new methodology. Most existing techniques of estimating the markup are a variant on Hall's (1988) framework involving the manipulation of the Solow Residual. However this paper argues that this notion is based on the unreasonable assumption that labor can be costlessly adjusted at a fixed wage rate. By relaxing this assumption, we are able to derive a generalized markup index, which when estimated using manufactu...

  10. An Attempt to Construct a Database of Photographic Data of Radiolarian Fossils with the Hypertext Markup Language

    OpenAIRE

    磯貝, 芳徳; 水谷, 伸治郎; Yoshinori, Isogai; Shinjiro, Mizutani

    1998-01-01

A collection of scanning electron micrographs of radiolarian fossils was turned into a database using the Hypertext Markup Language. The database currently holds about one thousand photographs of radiolarian fossils and can be searched from various viewpoints, such as fossil name, geological age, and collection locality. The construction of this database shows that HTML is effective when ordinary researchers, without special skills in computers or databases, want to build their own databases by themselves. A further notable feature of an HTML-based database is that anyone can use it over the Internet. We describe the construction process, report on the current status, and discuss the ideas behind the database and its remaining problems.
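In the same spirit, a minimal HTML index page for a photo collection can be generated with a few lines of Python. The record fields below are invented for illustration.

```python
from html import escape

def fossil_index(records):
    """Generate a minimal HTML index page for a photo collection;
    the field names (name/age/site/file) are hypothetical."""
    rows = "\n".join(
        f"<li><a href='{escape(r['file'])}'>{escape(r['name'])}</a>"
        f" ({escape(r['age'])}, {escape(r['site'])})</li>"
        for r in records
    )
    return f"<html><body><ul>\n{rows}\n</ul></body></html>"

page = fossil_index([
    {"name": "Dictyomitra", "age": "Cretaceous", "site": "Mino", "file": "d001.jpg"},
])
```

Escaping every field with `html.escape` keeps locality names or filenames with special characters from breaking the markup, which matters when non-specialists maintain the data.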

  11. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

We introduce a tool for quality assessment of procedural and declarative knowledge. We developed this tool for evaluating the specification of mark-up-based clinical GLs. Using this graphical tool, the expert physician and knowledge engineer collaborate to score, on a pre-defined scoring scale, each of the knowledge roles of the mark-ups, comparing them to a gold standard. The tool enables the mark-ups to be scored simultaneously by different users at different sites.

  12. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
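Trilateration recovers a point's position from its distances to three fixed reference points. A generic 2-D sketch (not the study's implementation) solves the linear system obtained by subtracting the circle equations pairwise, which eliminates the quadratic terms.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Recover a 2-D point from its distances to three fixed anchors,
    the principle used to re-anchor markups across slide scans."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 2 and circle 3 from circle 1 leaves two
    # linear equations in (x, y), solved here by Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Known point (3, 4) and its true distances to three anchors:
x, y = trilaterate((0.0, 0.0), 5.0,
                   (10.0, 0.0), math.sqrt(65),
                   (0.0, 10.0), math.sqrt(45))
```

The three anchors must not be collinear, otherwise `det` is zero and the position is not uniquely determined.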

  13. XForms 2.0

    NARCIS (Netherlands)

    S. Pemberton (Steven); J.M. Boyer; L.L. Klotz; N. van den Bleeken

    2012-01-01

    XForms is an XML markup for a new generation of forms and form-like applications on the Web. XForms is not a free-standing document type, but is integrated into other markup languages, such as [XHTML], [ODF] or [SVG]. An XForms-based application gathers and processes data using an

  14. Original Dataset - dbQSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    ... included - the subdirectory old/ is not included - pml/ includes data in PML (Polymorphism Markup Language)

  15. PENDEKATAN MODEL MATEMATIS UNTUK MENENTUKAN PERSENTASE MARKUP HARGA JUAL PRODUK

    OpenAIRE

    Oviliani Yenty Yuliana; Yohan Wahyudi; Siana Halim

    2002-01-01

    The purpose of this research is to design mathematical models that can determine the selling volume as an alternative to improve the markup percentage. The mathematical models were designed with multiple-regression statistics. Selling volume is a function of the markup, market condition, and substitute condition variables. The designed mathematical model has passed tests of: error assumptions, model accuracy, model validation, and multicollinearity. The Mathematical model has applied i...

  16. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    Science.gov (United States)

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), concrete tools and applications now exist for working with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have become established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems, rising costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, only a few applications have been developed in a medical context. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet changes quickly from an information to a communication medium. The first XML-based applications in healthcare show the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been proposed to the W3C and partly realised in medical applications.

  17. Creation of structured documentation templates using Natural Language Processing techniques.

    Science.gov (United States)

    Kashyap, Vipul; Turchin, Alexander; Morin, Laura; Chang, Frank; Li, Qi; Hongsermeier, Tonya

    2006-01-01

    Structured Clinical Documentation is a fundamental component of the healthcare enterprise, linking both clinical (e.g., electronic health record, clinical decision support) and administrative functions (e.g., evaluation and management coding, billing). One of the challenges in creating good quality documentation templates has been the inability to address specialized clinical disciplines and adapt to local clinical practices. A one-size-fits-all approach leads to poor adoption and inefficiencies in the documentation process. On the other hand, the cost associated with manual generation of documentation templates is significant. Consequently there is a need for at least partial automation of the template generation process. We propose an approach and methodology for the creation of structured documentation templates for diabetes using Natural Language Processing (NLP).

  18. Monopoly, Pareto and Ramsey mark-ups

    NARCIS (Netherlands)

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear

  19. PENGUKURAN KINERJA BEBERAPA SISTEM BASIS DATA RELASIONAL DENGAN KEMAMPUAN MENYIMPAN DATA BERFORMAT GML (GEOGRAPHY MARKUP LANGUAGE YANG DAPAT DIGUNAKAN UNTUK MENDASARI APLIKASI-APLIKASI SISTEM INFORMASI GEOGRAFIS

    Directory of Open Access Journals (Sweden)

    Adi Nugroho

    2009-01-01

    If we want to represent spatial data to users through GIS (Geographical Information System) applications, we have two choices for the underlying database: a general RDBMS (Relational Database Management System) storing conventional data types (number, char, varchar, etc.), or storing spatial data in GML (Geography Markup Language) format. GML is an XML vocabulary for spatial data. If we choose GML for storing spatial data, we again have two choices: an XML-enabled database (a relational database that can also store XML data) or a native XML database (NXD), a special database designed for XML data. In this paper, we compare the performance of several XML-enabled databases when performing GML CRUD (Create-Read-Update-Delete) operations against them. We also examine the flexibility of XML-enabled databases from the programmer's point of view.
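
    The "XML-enabled database" option being benchmarked can be sketched with an ordinary relational store holding GML fragments as text and exercising the four CRUD operations on them. The table layout and feature below are hypothetical, and SQLite stands in for the databases actually benchmarked.

```python
import sqlite3

# A GML point fragment stored as text (coordinates are illustrative).
gml_point = ('<gml:Point xmlns:gml="http://www.opengis.net/gml">'
             '<gml:coordinates>112.75,-7.25</gml:coordinates></gml:Point>')

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, name TEXT, gml TEXT)")

# Create
con.execute("INSERT INTO features (name, gml) VALUES (?, ?)", ("Surabaya", gml_point))
# Read
row = con.execute("SELECT gml FROM features WHERE name = ?", ("Surabaya",)).fetchone()
# Update
con.execute("UPDATE features SET name = ? WHERE name = ?", ("Kota Surabaya", "Surabaya"))
# Delete
con.execute("DELETE FROM features WHERE name = ?", ("Kota Surabaya",))

remaining = con.execute("SELECT COUNT(*) FROM features").fetchone()[0]
print(row[0].startswith("<gml:Point"), remaining)  # → True 0
```

    A native XML database would instead index the GML structure itself, which is exactly the trade-off the benchmark explores.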

  20. HTEL: a HyperText Expression Language

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jørgen

    1999-01-01

    been submitted. A special tool has been used to build the HTEL interpreter, as an example belonging to a family of interpreters for domain-specific languages. Members of that family have characteristics that are closely related to structural patterns found in the mark-ups of HTML. HTEL should also be seen...

  1. Castles Made of Sand: Building Sustainable Digitized Collections Using XML.

    Science.gov (United States)

    Ragon, Bart

    2003-01-01

    Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)

  2. What Constitutes Successful Format Conversion? Towards a Formalization of 'Intellectual Content'

    Directory of Open Access Journals (Sweden)

    C. M. Sperberg-McQueen

    2011-03-01

    Recent work in the semantics of markup languages may offer a way to achieve more reliable results for format conversion, or at least a way to state the goal more explicitly. In the work discussed, the meaning of markup in a document is taken as the set of things accepted as true because of the markup's presence, or equivalently, as the set of inferences licensed by the markup in the document. It is possible, in principle, to apply a general semantic description of a markup vocabulary to documents encoded using that vocabulary and to generate a set of inferences (typically rather large, but finite) as a result. An ideal format conversion translating a digital object from one vocabulary to another, then, can be characterized as one which neither adds nor drops any licensed inferences; it is possible to check this equivalence explicitly for a given conversion of a digital object, and possible in principle (although probably beyond current capabilities in practice) to prove that a given transformation will, if given valid and semantically correct input, always produce output that is semantically equivalent to its input. This approach is directly applicable to the XML formats frequently used for scientific and other data, but it is also easily generalized from SGML/XML-based markup languages to digital formats in general; at a high level, it is equally applicable to document markup, to database exchanges, and to ad hoc formats for high-volume scientific data. Some obvious complications and technical difficulties arising from this approach are discussed, as are some important implications.
In most real-world format conversions, the source and target formats differ at least somewhat in their ontology, either in the level of detail they cover or in the way they carve reality into classes; it is thus desirable not only to define what a perfect format conversion looks like, but to quantify the loss or distortion of information resulting from the conversion.
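
    The inference-set view of conversion quality lends itself to a direct sketch: model each vocabulary's semantics as a function from a marked-up document to a set of licensed inferences, then measure a conversion by the inferences it drops or adds. The two toy vocabularies and their semantic functions below are hypothetical stand-ins for real semantic descriptions.

```python
# Hypothetical semantic functions: map each tagged span to the set of
# inferences its markup licenses in that vocabulary.
def semantics_source(doc):
    inferences = set()
    for tag, text in doc:
        if tag == "emph":  # semantic emphasis licenses two inferences
            inferences.add(f"'{text}' is emphasized")
            inferences.add(f"'{text}' is rendered in italics")
        elif tag == "title":
            inferences.add(f"'{text}' is a title")
    return inferences

def semantics_target(doc):
    inferences = set()
    for tag, text in doc:
        if tag == "i":     # purely presentational italics
            inferences.add(f"'{text}' is rendered in italics")
        elif tag == "h1":
            inferences.add(f"'{text}' is a title")
    return inferences

def conversion_loss(src_doc, dst_doc):
    """Inferences dropped and added by a conversion (both empty = ideal)."""
    src, dst = semantics_source(src_doc), semantics_target(dst_doc)
    return src - dst, dst - src

source = [("emph", "markup"), ("title", "Semantics")]
converted = [("i", "markup"), ("h1", "Semantics")]
dropped, added = conversion_loss(source, converted)
print(dropped)  # the purely semantic 'is emphasized' inference is lost
```

    The sizes of the two difference sets give exactly the kind of quantified loss or distortion the abstract calls for.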

  3. ON CURRICULAR PROPOSALS OF THE PORTUGUESE LANGUAGE: A DOCUMENT ANALYSIS IN JUIZ DE FORA (MG

    Directory of Open Access Journals (Sweden)

    Tânia Guedes MAGALHÃES

    2014-12-01

    This paper, whose objective is to analyze two curricular proposals of Portuguese from the Juiz de Fora City Hall (2001 and 2012), is an extract from a research project entitled “On text genres and teaching: a collaborative research with teachers of Portuguese” (2011/2013). Text genres have been suggested by curricular proposals as a central object for teachers who work with Portuguese language teaching; for this reason, it is relevant to analyze the documents in the realm of the ongoing research. As theoretical references, we used authors who propose a didactic model based on the development of language skills and linguistic reasoning (MENDONÇA, 2006), which in turn is based on an interactional conception of language (BRONCKART, 1999; SCHNEUWLY; DOLZ, 2004). Document analysis was used as the methodology, which envisions the assessment of pieces of information in documents as well as their outcomes. The data show that the 2012 curricular proposal is more adequate to Portuguese language teaching than the first one, mainly for its theoretical and methodological grounding, which emphasizes the development of students’ linguistic and discursive skills. Guided by an interactionist notion – unlike the norm-centered 2001 proposal – the 2012 document fosters the development of linguistic reasoning and usage skills.

  4. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    Science.gov (United States)

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML-compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data and display molecular geometry and electronic structure in the GUI, allowing for an end-to-end solution in which Avogadro creates input structures and input files, NWChem runs the calculation, and Avogadro then reads in and analyses the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML-compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
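
    As a rough illustration of consuming such output, the fragment below parses a small CML-style molecule with Python's standard XML library. The geometry attributes (elementType, x3/y3/z3) follow CML's conventions, though real NWChem CompChem output is considerably richer; the water geometry is an illustrative value.

```python
import xml.etree.ElementTree as ET

# An illustrative CML-style fragment (a water molecule).
cml = """<molecule xmlns="http://www.xml-cml.org/schema" id="m1">
  <atomArray>
    <atom id="a1" elementType="O" x3="0.000" y3="0.000" z3="0.119"/>
    <atom id="a2" elementType="H" x3="0.000" y3="0.763" z3="-0.477"/>
    <atom id="a3" elementType="H" x3="0.000" y3="-0.763" z3="-0.477"/>
  </atomArray>
</molecule>"""

ns = {"cml": "http://www.xml-cml.org/schema"}
root = ET.fromstring(cml)
atoms = [
    (a.get("elementType"),
     float(a.get("x3")), float(a.get("y3")), float(a.get("z3")))
    for a in root.findall(".//cml:atom", ns)
]
print(atoms)  # [('O', 0.0, 0.0, 0.119), ('H', ...), ('H', ...)]
```

    A reader like Avogadro's does essentially this, then hands the geometry to the visualization layer.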

  5. IMPROVING THE INTEROPERABILITY OF DISASTER MODELS: A CASE STUDY OF PROPOSING FIREML FOR FOREST FIRE MODEL

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    This paper presents a new standardized data format named Fire Markup Language (FireML), which extends the Geography Markup Language (GML) of the OGC, to elaborate upon the fire hazard model. The proposed FireML standardizes the input and output documents of a fire model so that it can communicate effectively with different disaster management systems and ensure good interoperability. To demonstrate the usage of FireML and test its feasibility, a forest fire spread model adapted to be compatible with FireML is described, and a 3D GIS disaster management system is developed to simulate the dynamic process of forest fire spread with the defined FireML documents. The proposed approach should inform standardization work on other disaster models.

  6. Improving the Interoperability of Disaster Models: a Case Study of Proposing Fireml for Forest Fire Model

    Science.gov (United States)

    Jiang, W.; Wang, F.; Meng, Q.; Li, Z.; Liu, B.; Zheng, X.

    2018-04-01

    This paper presents a new standardized data format named Fire Markup Language (FireML), which extends the Geography Markup Language (GML) of the OGC, to elaborate upon the fire hazard model. The proposed FireML standardizes the input and output documents of a fire model so that it can communicate effectively with different disaster management systems and ensure good interoperability. To demonstrate the usage of FireML and test its feasibility, a forest fire spread model adapted to be compatible with FireML is described, and a 3D GIS disaster management system is developed to simulate the dynamic process of forest fire spread with the defined FireML documents. The proposed approach should inform standardization work on other disaster models.

  7. "The Wonder Years" of XML.

    Science.gov (United States)

    Gazan, Rich

    2000-01-01

    Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…

  8. Non-Stationary Inflation and the Markup: an Overview of the Research and some Implications for Policy

    OpenAIRE

    Bill Russell

    2006-01-01

    This paper reports on research into the negative relationship between inflation and the markup. It is argued that this relationship can be thought of as ‘long-run’ in nature, which suggests that inflation has a persistent effect on the markup and, therefore, the real wage. A ‘rule of thumb’ from the estimates indicates that a 10 percentage point increase in inflation (as occurred worldwide in the 1970s) is associated with around a 7 per cent fall in the markup accompanied by a similar increase ...

  9. Performance analysis of Java APIS for XML processing

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, the XML markup language has acquired considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as ...

  10. Processing XML with Java – a performance benchmark

    OpenAIRE

    Oliveira, Bruno; Santos, Vasco; Belo, Orlando

    2013-01-01

    Over time, the XML markup language has acquired considerable importance in applications development, standards definition and in the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which imposes choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, suc...

  11. WaterML: an XML Language for Communicating Water Observations Data

    Science.gov (United States)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of the Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed, point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
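
    A minimal sketch of the idea, assuming a simplified, WaterML-inspired structure: the tag names, site code, and values below are illustrative, not the normative WaterML schema.

```python
import xml.etree.ElementTree as ET

# A simplified, WaterML-inspired time series for one observation site.
doc = """<timeSeries siteCode="08158000" variable="streamflow" unit="cfs">
  <value dateTime="2007-10-01T00:00:00">150.0</value>
  <value dateTime="2007-10-01T00:15:00">148.5</value>
  <value dateTime="2007-10-01T00:30:00">151.2</value>
</timeSeries>"""

root = ET.fromstring(doc)
values = [float(v.text) for v in root.iter("value")]
# Any consumer can now compute over the series without knowing the
# publishing agency's original in-house format.
print(root.get("variable"), sum(values) / len(values))
```

    Because every agency's feed parses the same way, a WaterOneFlow-style service only has to translate once, at the source.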

  12. XML DTD and Schemas for HDF-EOS

    Science.gov (United States)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files and their contents, and an equivalent standard in the form of schemas, have been developed.

  13. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
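
    The well-formedness problem the authors ran into can be reproduced directly: an XML parser rejects the implied-end-tag shortcuts that legacy HTML allows, which is why standardizing on XML made automated processing of the conference pages feasible. A small check using Python's standard library (the fragments are illustrative):

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    """True if the markup parses as well-formed XML."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

# Legacy HTML commonly leaves <li> unclosed; XML requires explicit end tags.
html_fragment = "<ul><li>Invited speakers<li>Participants</ul>"
xml_fragment = "<ul><li>Invited speakers</li><li>Participants</li></ul>"
print(is_well_formed(html_fragment), is_well_formed(xml_fragment))  # → False True
```

    With well-formed input, extraction of speakers, participants, and papers becomes a mechanical tree walk instead of an iterative repair process.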

  14. Markdoc

    DEFF Research Database (Denmark)

    Haghish, E. F.

    2016-01-01

    for typesetting the document. In this article, I introduce markdoc, a software package for interactive literate programming and generating dynamic-analysis documents in Stata. markdoc recognizes the Markdown, LaTeX, and HTML markup languages and can export documents in several formats, such as PDF, Microsoft Office .docx, OpenOffice and LibreOffice .odt, LaTeX, HTML, ePub, and Markdown.

  15. Automation and integration of components for generalized semantic markup of electronic medical texts.

    Science.gov (United States)

    Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.
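
    The markup-and-query matching at the core of this system reduces to set operations: each text section carries the set of concepts the domain expert assigned to it, and a predefined question retrieves the sections that cover all of its concepts. The section names and concepts below are hypothetical, not drawn from the ACQUIRE models.

```python
# Concept model applied to the text: section -> concepts marked by the DE.
sections = {
    "sec1": {"diabetes", "treatment", "insulin"},
    "sec2": {"diabetes", "diagnosis"},
    "sec3": {"hypertension", "treatment"},
}

def answer(query_concepts, marked_sections):
    """Return the sections whose markup covers every query concept."""
    return sorted(
        sec for sec, concepts in marked_sections.items()
        if query_concepts <= concepts  # subset test: section covers the query
    )

print(answer({"diabetes", "treatment"}, sections))  # → ['sec1']
```

    ACQUIRE's contribution is upstream of this matching: it cuts the time the domain expert spends building the concept and query models in the first place.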

  16. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapid searching of unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  17. Monopoly, Pareto and Ramsey mark-ups

    OpenAIRE

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.

  18. Proceedings of Conference on Language Documentation and Linguistic Theory 3

    OpenAIRE

    Austin, PK

    2011-01-01

    A collection dealing with language documentation and linguistic theory from a conference held at SOAS on 19-20 November 2011. Papers by Peter Austin, Oliver Bond, Lutz Marten & David Nathan, Balthasar Bickel, Anju Saxena, Anvita Abbi, Cathryn Bartram, Henrik Bergqvist, Martine Bruil, Eliane Camargo & Sabine Reiter, Kearsy Cormier, Jordan Fenlon, Ramas Rentelis, & Adam Schembri, Simeon Floyd & Martine Bruil, Diana Forker, Michael Franjieh & Kilu von Prince, Brent Henderson & Charl...

  19. Integrating deep and shallow natural language processing components : representations and hybrid architectures

    OpenAIRE

    Schäfer, Ulrich

    2006-01-01

    We describe basic concepts and software architectures for the integration of shallow and deep (linguistics-based, semantics-oriented) natural language processing (NLP) components. The main goal of this novel, hybrid integration paradigm is improving robustness of deep processing. After an introduction to constraint-based natural language parsing, we give an overview of typical shallow processing tasks. We introduce XML standoff markup as an additional abstraction layer that eases integration ...

  20. A methodology for improving the SIS-RT in analyzing the traceability of the documents written in Korean language

    International Nuclear Information System (INIS)

    Yoo, Yeong Jae; Kim, Man Cheol; Seong, Poong Hyun

    2002-01-01

    Inspection is widely believed to be an effective software verification and validation (V and V) method. However, software inspection is labor-intensive. This labor-intensive nature is compounded by a view that, since software inspections use little technology, they do not fit in well with a more technology-oriented development environment. Nevertheless, software inspection is gaining in popularity. The researchers of the KAIST I and C laboratory developed a software tool for managing and supporting inspection tasks, named SIS-RT. SIS-RT is designed to partially automate the software inspection process. SIS-RT supports the analysis of traceability between spec documents. To prepare SIS-RT for spec documents written in the Korean language, certain techniques in natural language processing have been reviewed. Among these, case grammar is the most suitable for the analysis of Korean. In this paper, a methodology for analyzing the traceability between spec documents written in the Korean language is proposed based on case grammar.

  1. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    Science.gov (United States)

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  2. Fuzzy Markup language : a new solution for transparent intelligent agents

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.

    2011-01-01

    From an industrial and technological point of view, fuzzy control theory deals with the development of a particular system controller on a specific hardware by means of an open or legacy programming language that is useful to address, in a high-level fashion, the hardware constraints. Independently

  3. Using XML to Separate Content from the Presentation Software in eLearning Applications

    Science.gov (United States)

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…

  4. The semantics of Chemical Markup Language (CML): dictionaries and conventions

    Science.gov (United States)

    2011-01-01

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs. PMID:21999509

  5. The semantics of Chemical Markup Language (CML): dictionaries and conventions.

    Science.gov (United States)

    Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens

    2011-10-14

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
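
    The dictionary-validation step can be caricatured as a membership check: every dictionary reference in a document must resolve to an entry in a known dictionary. In real CML these references resolve to unique URIs; the prefixed entry names below are illustrative only.

```python
# Hypothetical dictionary: the set of terms a validator knows about,
# standing in for entries addressable via unique URIs.
dictionary = {
    "compchem:job",
    "compchem:module",
    "cc:hamiltonian",
}

# Dictionary references found in a document being validated.
document_refs = ["compchem:job", "compchem:module", "cc:basisSet"]

# Validation: collect the references with no dictionary entry.
unknown = [ref for ref in document_refs if ref not in dictionary]
print(unknown)  # → ['cc:basisSet']
```

    A real validator additionally checks each referenced entry's constraints (units, data types) against the data it annotates, which is what makes the dictionaries machine-processable rather than merely human-readable.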

  6. Defining Linkages between the GSC and NSF's LTER Program: How the Ecological Metadata Language (EML) Relates to GCDML and Other Outcomes

    Science.gov (United States)

    Inigo San Gil; Wade Sheldon; Tom Schmidt; Mark Servilla; Raul Aguilar; Corinna Gries; Tanya Gray; Dawn Field; James Cole; Jerry Yun Pan; Giri Palanisamy; Donald Henshaw; Margaret O' Brien; Linda Kinkel; Kathrine McMahon; Renzo Kottmann; Linda Amaral-Zettler; John Hobbie; Philip Goldstein; Robert P. Guralnick; James Brunt; William K. Michener

    2008-01-01

    The Genomic Standards Consortium (GSC) invited a representative of the Long-Term Ecological Research (LTER) to its fifth workshop to present the Ecological Metadata Language (EML) metadata standard and its relationship to the Minimum Information about a Genome/Metagenome Sequence (MIGS/MIMS) and its implementation, the Genomic Contextual Data Markup Language (GCDML)....

  7. What language is your doctor speaking? Facing the problems of translating medical documents into English

    Directory of Open Access Journals (Sweden)

    Mićović Dragoslava

    2013-01-01

    Full Text Available What is translation - a craft, an art, a profession or a job? Although one of the oldest human activities, translation has still not been fully defined, and as an academic discipline it is still young. The paper defines the difference between translation and interpreting and then attempts to answer the question of what characteristics, knowledge and skills a translator must have, particularly one involved in court translation, and where his/her place is in the communication process (both written and oral. When translating medical documentation, a translator is set within a medical language environment as an intermediary between two doctors (in other words, two professionals) in a process of communication which would be impossible without him, since it is conducted in two different languages. The paper also gives an insight into the types of medical documentation and whom they are intended for. It gives practical examples of the problems faced in the course of translating certain types of medical documentation (hospital discharge papers, diagnoses, case reports, ...). Is it possible to standardize this kind of communication between professionals (doctors), which would subsequently make its translation easier? Although great efforts are being made in Serbia regarding medical language and medical terminology, the conclusion is that the specific problems encountered by translators can hardly be overcome using only dictionaries and translation manuals.

  8. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    Science.gov (United States)

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend slowed down after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend slowed down after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention, changes in the reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P … the zero-markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.

  9. On Training in Language Documentation and Capacity Building in Papua New Guinea: A Response to Bird et al.

    Science.gov (United States)

    Brooks, Joseph D.

    2015-01-01

    In a recent article, Bird et al. (2013) discuss a workshop held at the University of Goroka in Papua New Guinea (PNG) in 2012. The workshop was intended to offer a new methodological framework for language documentation and capacity building that streamlines the documentation process and accelerates the global effort to document endangered…

  10. 77 FR 20835 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-04-06

    ... Interchange (EDI). This notice also describes test particulars including commencement date, eligibility... Electronic Data Interchange (EDI) as part of the Document Image System (DIS) test. DIS is currently a stand... with supporting information via EDI in an Extensible Markup Language (XML) format, in lieu of...

  11. The structure of an entry in the National corpus of Tuvan language

    Directory of Open Access Journals (Sweden)

    Mengi V. Ondar

    2016-12-01

    Full Text Available Contemporary information technologies and mathematical modelling have made creating corpora of natural languages significantly easier. A corpus is an information and reference system based on a collection of digitally processed texts. A corpus includes various written and oral texts in the given language, a set of dictionaries and markup – information on the properties of the text. It is the presence of the markup which distinguishes a corpus from an electronic library. At the moment, national corpora are being set up for many languages of the Russian Federation, including those of the Turkic peoples. Faculty members, postgraduate and undergraduate students at Tuvan State University and Siberian Federal University are working on the National corpus of Tuvan language. This article describes the structure of a dictionary entry in the National corpus of Tuvan language. The corpus database comprises the following tables: MAIN – the headword table; RUS, ENG, GER – translations of the headword into the three languages; and MORPHOLOGY – the table containing morphological data on the headword. The database is built in Microsoft Office Access. Working with the corpus dictionary includes the following functions: adding, editing and removing an entry; entry search (with transcription); and setting and visualizing the morphological features of a headword. The project allows us to view the corpus dictionary as a multi-structure entity with a complex hierarchical structure, and a dictionary entry as its key component. The corpus dictionary we developed can be used for studying Tuvan pronunciation, orthography and word analysis, as well as for searching for words and collocations in the texts included in the corpus.
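The table layout described above can be sketched with an in-memory database. The table names (MAIN, ENG, MORPHOLOGY) follow the abstract, while the column layout and the sample entry are invented for illustration; the actual project uses Microsoft Office Access rather than SQLite.

```python
import sqlite3

# Table names follow the abstract (MAIN, RUS, ENG, GER, MORPHOLOGY); only
# three of them are created here for brevity, and the columns are guessed.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE MAIN (id INTEGER PRIMARY KEY, headword TEXT NOT NULL);
CREATE TABLE ENG  (main_id INTEGER REFERENCES MAIN(id), translation TEXT);
CREATE TABLE MORPHOLOGY (main_id INTEGER REFERENCES MAIN(id),
                         pos TEXT, features TEXT);
""")

def add_entry(headword, eng, pos, features):
    """Add a headword together with its English translation and morphology."""
    cur.execute("INSERT INTO MAIN (headword) VALUES (?)", (headword,))
    main_id = cur.lastrowid
    cur.execute("INSERT INTO ENG VALUES (?, ?)", (main_id, eng))
    cur.execute("INSERT INTO MORPHOLOGY VALUES (?, ?, ?)",
                (main_id, pos, features))
    return main_id

# A sample Tuvan headword ("суг" = water), invented for the sketch.
add_entry("суг", "water", "noun", "nominative;singular")

row = cur.execute("""
    SELECT m.headword, e.translation, mo.pos
    FROM MAIN m
    JOIN ENG e ON e.main_id = m.id
    JOIN MORPHOLOGY mo ON mo.main_id = m.id
    WHERE m.headword = ?""", ("суг",)).fetchone()
print(row)  # ('суг', 'water', 'noun')
```

The headword table as the hub with per-language and morphology satellites mirrors the hierarchical entry structure the article describes.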

  12. Introduction to Beautiful Soup

    Directory of Open Access Journals (Sweden)

    Jeri Wieringa

    2012-12-01

    Full Text Available Beautiful Soup is a Python library for getting data out of HTML, XML, and other markup languages. Say you’ve found some webpages that display data relevant to your research, such as date or address information, but that do not provide any way of downloading the data directly. Beautiful Soup helps you pull particular content from a webpage, remove the HTML markup, and save the information. It is a tool for web scraping that helps you clean up and parse the documents you have pulled down from the web.
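A minimal sketch of the workflow the abstract describes, using an invented page fragment standing in for a downloaded webpage; `find_all` and `get_text` are standard Beautiful Soup calls.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# A hypothetical page fragment with the kind of date/address data the
# abstract mentions; in practice this string would come from a download.
html = """
<html><body>
  <h1>Station Records</h1>
  <table>
    <tr><td class="date">1851-04-02</td><td class="addr">12 High St</td></tr>
    <tr><td class="date">1851-04-09</td><td class="addr">3 Mill Lane</td></tr>
  </table>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Pull just the dates, discarding the surrounding HTML markup.
dates = [td.get_text() for td in soup.find_all("td", class_="date")]
print(dates)  # ['1851-04-02', '1851-04-09']
```

From here the cleaned values can be written to CSV or a database, which is the "save the information" step of the scraping workflow.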

  13. Introduction to Beautiful Soup

    OpenAIRE

    Jeri Wieringa

    2012-01-01

    Beautiful Soup is a Python library for getting data out of HTML, XML, and other markup languages. Say you’ve found some webpages that display data relevant to your research, such as date or address information, but that do not provide any way of downloading the data directly. Beautiful Soup helps you pull particular content from a webpage, remove the HTML markup, and save the information. It is a tool for web scraping that helps you clean up and parse the documents you have pulled down from t...

  14. FILM/TALK: Photography A Visual Language: The Landscape Document Part 1

    OpenAIRE

    Murray, Matthew; United Nations of Photography; Film courtesy of Reece Pickering and Tchad Findlay

    2016-01-01

    This short film is part of a conversation featuring photographers Marc Wilson, Brian David Stevens and Matthew Murray hosted by Ian McGuffie. In this film they passionately discuss the highly personal inspirations for their work, their process of working, the importance of history in personal experience and the role of landscape photography as a social document. This talk was part of a day of talks titled Photography A Visual Language: A Day of Conversation held by us in collaboration...

  15. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English Dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  16. DEMAND FOR AND SUPPLY OF MARK-UP AND PLS FUNDS IN ISLAMIC BANKING: SOME ALTERNATIVE EXPLANATIONS

    OpenAIRE

    KHAN, TARIQULLAH

    1995-01-01

    Profit and loss-sharing (PLS) and bai’ al murabahah lil amir bil shira (mark-up) are the two parent principles of Islamic financing. The use of PLS is limited and that of mark-up overwhelming in the operations of the Islamic banks. Several studies provide different explanations for this phenomenon. The dominant among these is the moral hazard hypothesis. Some alternative explanations are given in the present paper. The discussion is based on both demand (user of funds) and supply (bank) side ...

  17. A program code generator for multiphysics biological simulation using markup languages.

    Science.gov (United States)

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments; thus, it is difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of a description model file is helpful for the first point and partly for the second, but the third is difficult to handle, because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this system, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
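The paper's generator itself is not shown here, but the general idea of emitting solver code from a declarative model description can be sketched as follows. The model format, the `generate_euler` helper and the explicit-Euler scheme are all invented for illustration and are much simpler than a real multiphysics code generator.

```python
# A plain dict stands in for a parsed description-language model file.
MODEL = {
    "state": {"v": 0.0},
    "equations": {"v": "-0.5 * v"},   # dv/dt = -0.5 * v
}

def generate_euler(model, dt):
    """Emit Python source for one explicit-Euler step over the model.
    Swapping this function for another would change the calculation
    scheme without touching the model description."""
    lines = [f"def step({', '.join(model['state'])}):"]
    for var, rhs in model["equations"].items():
        lines.append(f"    d_{var} = {rhs}")
    for var in model["equations"]:
        lines.append(f"    {var} = {var} + {dt} * d_{var}")
    lines.append(f"    return {', '.join(model['state'])}")
    return "\n".join(lines)

src = generate_euler(MODEL, dt=0.1)
namespace = {}
exec(src, namespace)          # compile the generated solver

v = 1.0
for _ in range(3):            # three Euler steps: v -> 0.95 * v each step
    v = namespace["step"](v)
print(round(v, 6))  # 0.857375
```

Separating the model description from the scheme-specific emitter is the point the abstract makes: the same model file can feed several coupling or integration schemes.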

  18. Silicon Graphics' IRIS InSight: An SGML Success Story.

    Science.gov (United States)

    Glushko, Robert J.; Kershner, Ken

    1993-01-01

    Offers a case history of the development of the Silicon Graphics "IRIS InSight" system, a system for viewing on-line documentation using Standard Generalized Markup Language. Notes that SGML's explicit encoding of structure and separation of structure and presentation make possible structure-based search, alternative structural views of…

  19. An Electronic Publishing Model for Academic Publishers.

    Science.gov (United States)

    Gold, Jon D.

    1994-01-01

    Describes an electronic publishing model based on Standard Generalized Markup Language (SGML) and considers its use by an academic publisher. Highlights include how SGML is used to produce an electronic book, hypertext, methods of delivery, intellectual property rights, and future possibilities. Sample documents are included. (two references) (LRW)

  20. CURIE Syntax 1.0, A syntax for expressing Compact URIs

    NARCIS (Netherlands)

    institution W3C; M. Birbeck (Mark); et al

    2007-01-01

    textabstractThe aim of this document is to outline a syntax for expressing URIs in a generic, abbreviated syntax. While it has been produced in conjunction with the HTML Working Group, it is not specifically targeted at use by XHTML Family Markup Languages. Note that the target audience for this

  1. How Does XML Help Libraries?

    Science.gov (United States)

    Banerjee, Kyle

    2002-01-01

    Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…

  2. Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry

    Science.gov (United States)

    Zschocke, Thomas

    2012-01-01

    Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper is an extended version of a paper selected from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…

  3. Modularization and Structured Markup for Learning Content in an Academic Environment

    Science.gov (United States)

    Schluep, Samuel; Bettoni, Marco; Schar, Sissel Guttormsen

    2006-01-01

    This article aims to present a flexible component model for modular, web-based learning content, and a simple structured markup schema for the separation of content and presentation. The article will also contain an overview of the dynamic Learning Content Management System (dLCMS) project, which implements these concepts. Content authors are a…

  4. Measuring the Relevance of Documents Retrieved by Google Cross-Language Retrieval in the Agriculture Subject Area

    Directory of Open Access Journals (Sweden)

    Fatemeh Jamshidi Ghahfarokhi

    2014-02-01

    Full Text Available This study investigated the relevance of documents retrieved by Google's cross-language retrieval tools for an agricultural subject area. For this purpose, Persian phrases and subject terms, together with their English equivalents, were extracted from Persian journal articles that have English abstracts. Thirty phrases and subject terms from the agriculture area were extracted in three classes: first, subject phrases used only in agriculture; second, agricultural subject terms that are also used in other fields; and third, agricultural subject terms that are regarded as general terms outside this field. Documents were then searched with these phrases and terms, and the relevance of the search results was assessed. The results showed that Google's cross-language retrieval tools did not succeed in the cross-language retrieval of relevant documents in the agriculture subject area for two classes of phrases and terms: agricultural subject terms that are also used in other fields, and agricultural subject terms that are regarded as general terms outside agriculture. For subject phrases and terms used only in the field of agriculture, the tools performed rather better than for the other two classes.

  5. Visual Guidebooks: Documenting a Personal Thinking Language

    Science.gov (United States)

    Shambaugh, Neal; Beacham, Cindy

    2017-01-01

    A personal thinking language consists of verbal and visual means to transform ideas to action in social and work settings. This verbal and visual interaction of images and language is influenced by one's personal history, cultural expectations and professional practices. The article first compares a personal thinking language to other languages…

  6. XForms 1.1 candidate recommendation

    NARCIS (Netherlands)

    J.M. Boyer

    2007-01-01

    textabstractXForms is an XML application that represents the next generation of forms for the Web. XForms is not a free-standing document type, but is intended to be integrated into other markup languages, such as XHTML, ODF or SVG. An XForms-based web form gathers and processes XML data using an

  7. Modeling of the positioning system and visual mark-up of historical cadastral maps

    Directory of Open Access Journals (Sweden)

    Tomislav Jakopec

    2013-03-01

    Full Text Available The aim of the paper is to present the possibilities of positioning and visual markup of historical cadastral maps onto Google maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of the cadastral documentation that consists of cadastral material from the period of the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and which is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system which would use digital copies of original cadastral maps and connect them with systems like Google maps, and thus both protect the original materials and open up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. First, there is a detailed display of the original, which enables its viewing using dynamic zooming. Second, an interactive display is facilitated through blending the cadastral maps with Google maps, which resulted in establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on cadastral maps. The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for…

  8. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities’ libraries

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Introduction: Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies, such as metadata, to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study of the Central Libraries' Websites of Iran State Universities for Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. Materials and Methods: The method of this study is applied-descriptive, and the data collection tools are checklists created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to websites, and data analysis was prepared with Microsoft Excel software, 2011. Results: The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentages of overlapping DC elements in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries: the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). Conclusion: It is suggested that the Ministry of Health and the Ministry of Science follow the same path in using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers

  9. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities' libraries.

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies, such as metadata, to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study of the Central Libraries' Websites of Iran State Universities for Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. The method of this study is applied-descriptive, and the data collection tools are checklists created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to websites, and data analysis was prepared with Microsoft Excel software, 2011. The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentages of overlapping DC elements in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries: the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). It is suggested that the Ministry of Health and the Ministry of Science follow the same path in using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers to achieve faster and more accurate information resources
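The kind of tally the survey's checklists recorded can be sketched with the standard library's HTML parser; the sample page and the `MetaAudit` class are invented for illustration.

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Count plain HTML meta tags and Dublin Core (DC.*) elements on a page."""

    def __init__(self):
        super().__init__()
        self.html_meta = {}   # plain meta tags: keywords, description, ...
        self.dc_meta = {}     # Dublin Core tags: DC.title, DC.creator, ...

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name, content = a.get("name", ""), a.get("content", "")
        if name.startswith("DC."):
            self.dc_meta[name] = content
        elif name:
            self.html_meta[name] = content

# An invented library-website head fragment.
page = """
<head>
  <meta name="keywords" content="library, metadata">
  <meta name="description" content="Central library website">
  <meta name="DC.title" content="Central Library">
</head>
"""

audit = MetaAudit()
audit.feed(page)
print(sorted(audit.html_meta))  # ['description', 'keywords']
print(sorted(audit.dc_meta))    # ['DC.title']
```

Run over each site in the sample, tallies like these give exactly the element-usage percentages the study reports.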

  10. Midstream streamlining: crude-oil document exchange would be faster with fewer errors

    Energy Technology Data Exchange (ETDEWEB)

    Roche, P.

    2001-11-01

    WellPoint Systems, a Calgary-based oil and gas software firm, has designed an Internet-based crude-oil document exchange system, dubbed CDOX, that would see battery operators filling out production forecast forms on the systems they are using now; but instead of faxing the form to pipeline and terminal operators, the operator could automatically transmit the data over the Internet to the intended companies' computers. Unlike e-mail, the information would be put in place at the receiving end without human intervention. At the heart of the system is a digital hub that securely routes documents from one system to another. Each participating company only has to keep its own profile information up to date, instead of managing hundreds of customer e-mail addresses. The system would run on software provided by webMethods Inc., a Fairfax, Virginia-based B2B software provider. Documents will be in Extensible Markup Language (XML), which is more flexible than HTML, allowing groups of users to create their own tags or extensions for integrated information flows and automated business-to-business transactions. The advantages are lower cost due to automated loading: CDOX would eliminate the need for manual retyping of faxed data and the errors made during this process, and pipeline operators could start analyzing the data immediately rather than waiting to have the data keyed into their systems. A one-year pilot project is now being organized by WellPoint Systems. The pilot project will include crude oil shippers and pipeline operators. WellPoint hopes that the monthly Form A battery operators' forecast will be moving on the system within three months. During the rest of the one-year pilot project, seven other documents are planned to be added, including notice of shipment and shippers balance.
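The CDOX document formats are not public, so the tag names below are invented; the sketch only illustrates how a small custom XML vocabulary can replace a faxed form and be loaded at the receiving end without re-keying.

```python
import xml.etree.ElementTree as ET

def build_forecast(battery_id, month, barrels):
    """Serialize an invented production-forecast form as XML; the element
    names are hypothetical, not WellPoint's actual schema."""
    root = ET.Element("ProductionForecast")
    ET.SubElement(root, "BatteryID").text = battery_id
    ET.SubElement(root, "Month").text = month
    ET.SubElement(root, "Barrels").text = str(barrels)
    return ET.tostring(root, encoding="unicode")

doc = build_forecast("AB-0123", "2001-12", 4500)
print(doc)

# The receiving system parses the same document without human intervention:
parsed = ET.fromstring(doc)
print(parsed.findtext("Barrels"))  # 4500
```

Because both ends agree on the vocabulary, the data lands directly in the recipient's system, which is the cost and error advantage the article describes.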

  11. Development of a traceability analysis method based on case grammar for NPP requirement documents written in Korean language

    International Nuclear Information System (INIS)

    Yoo, Yeong Jae; Seong, Poong Hyun; Kim, Man Cheol

    2004-01-01

    Software inspection is widely believed to be an effective method for software verification and validation (V and V). However, software inspection is labor-intensive and, since it uses little technology, it is often viewed as unsuitable for a more technology-oriented development environment. Nevertheless, software inspection is gaining in popularity. The KAIST Nuclear I and C and Information Engineering Laboratory (NICIEL) has developed software management and inspection support tools, collectively named 'SIS-RT.' SIS-RT is designed to partially automate the software inspection processes. SIS-RT supports the analyses of traceability between a given set of specification documents. To make SIS-RT compatible with documents written in Korean, certain techniques in natural language processing have been studied. Among the techniques considered, case grammar is the most suitable for analyses of the Korean language. In this paper, we propose a methodology that uses a case grammar approach to analyze the traceability between documents written in Korean. A discussion of some examples of such an analysis follows

  12. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....

  13. Web-based X-ray quality control documentation.

    Science.gov (United States)

    David, George; Burnett, Lou Ann; Schenkel, Robert

    2003-01-01

    The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal

  14. Information Security and Wireless: Alternate Approaches for Controlling Access to Critical Information

    National Research Council Canada - National Science Library

    Nandram, Winsome

    2004-01-01

    .... Typically, network managers implement countermeasures to augment security. The goal of this thesis is to research approaches that complement existing security measures with fine-grain access control measures. The Extensible Markup Language (XML) is adopted to accommodate such granular access control, as it provides the mechanisms for scaling security down to the document content level.

  15. XML Schema Guide for Primary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  16. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

    The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism which provides programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD (Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack deployed at NSF-supported hydrologic observatory sites and other universities around the country supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service, and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes

  17. Indian Language Document Analysis and Understanding

    Indian Academy of Sciences (India)

    documents would contain text of more than one script (for example, English, Hindi and the ... O'Gorman and Govindaraju provides a good overview on document image ... word level in bilingual documents containing Roman and Tamil scripts.

  18. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  19. Informatics in radiology: An open-source and open-access cancer biomedical informatics grid annotation and image markup template builder.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L

    2012-01-01

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

  20. Analysis of Documents Published in Scopus Database on Foreign Language Learning through Mobile Learning: A Content Analysis

    Science.gov (United States)

    Uzunboylu, Huseyin; Genc, Zeynep

    2017-01-01

    The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and related content analysis among the qualitative research methodology. Through the search conducted on Scopus database with the key words "mobile learning and foreign language…

  1. XML Schema Guide for Secondary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by the Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  2. The medical simulation markup language - simplifying the biomechanical modeling workflow.

    Science.gov (United States)

    Suwelack, Stefan; Stoll, Markus; Schalck, Sebastian; Schoch, Nicolai; Dillmann, Rüdiger; Bendl, Rolf; Heuveline, Vincent; Speidel, Stefanie

    2014-01-01

    Modeling and simulation of the human body by means of continuum mechanics has become an important tool in diagnostics, computer-assisted interventions and training. This modeling approach seeks to construct patient-specific biomechanical models from tomographic data. Usually many different tools, such as segmentation and meshing algorithms, are involved in this workflow. In this paper we present a generalized and flexible description for biomechanical models, the Medical Simulation Markup Language (MSML). The unique feature of the new modeling language is that it describes not only the final biomechanical simulation, but also the workflow by which the biomechanical model is constructed from tomographic data. In this way, the MSML can act as a middleware between all tools used in the modeling pipeline. The MSML thus greatly facilitates the prototyping of medical simulation workflows for clinical and research purposes. In this paper, we not only detail the XML-based modeling scheme, but also present a concrete implementation. Different examples highlight the flexibility, robustness and ease-of-use of the approach.

  3. Semiotics, Information Science, Documents and Computers.

    Science.gov (United States)

    Warner, Julian

    1990-01-01

    Discusses the relationship and value of semiotics to the established domains of information science. Highlights include documentation; computer operations; the language of computing; automata theory; linguistics; speech and writing; and the written language as a unifying principle for the document and the computer. (93 references) (LRW)

  4. ScienceCentral: open access full-text archive of scientific journals based on Journal Article Tag Suite regardless of their languages.

    Science.gov (United States)

    Huh, Sun

    2013-01-01

    ScienceCentral, a free or open access, full-text archive of scientific journal literature at the Korean Federation of Science and Technology Societies, was under test in September 2013. Since it is a Journal Article Tag Suite-based full-text database, extensible markup language files in all languages can be presented, according to Unicode Transformation Format 8-bit (UTF-8) encoding. It is comparable to PubMed Central; however, there are two distinct differences. First, its scope comprises all science fields; second, it accepts journals in all languages. Launching ScienceCentral is the first step for free access or open access academic scientific journals of all languages to reach out to the world, including scientific journals from Croatia.

  5. Inclusion of Students with Special Education Needs in French as a Second Language Programs: A Review of Canadian Policy and Resource Documents

    Science.gov (United States)

    Muhling, Stefanie; Mady, Callie

    2017-01-01

    This article describes a document analysis of policy and resource documents pertaining to inclusion of students with special education needs (SSEN) in Canadian French as a Second Language (FSL) programs. By recognizing gaps and acknowledging advancements, we aim to inform current implementation and future development of inclusive policy. Document…

  6. FMS: A Format Manipulation System for Automatic Production of Natural Language Documents, Second Edition. Final Report.

    Science.gov (United States)

    Silver, Steven S.

    FMS/3 is a system for producing hard copy documentation at high speed from free format text and command input. The system was originally written in assembler language for a 12K IBM 360 model 20 using a high speed 1403 printer with the UCS-TN chain option (upper and lower case). Input was from an IBM 2560 Multi-function Card Machine. The model 20…

  7. Language Revitalization.

    Science.gov (United States)

    Hinton, Leanne

    2003-01-01

    Surveys developments in language revitalization and language death. Focusing on indigenous languages, discusses the role and nature of appropriate linguistic documentation, possibilities for bilingual education, and methods of promoting oral fluency and intergenerational transmission in affected languages. (Author/VWL)

  8. Cross-language information retrieval using PARAFAC2.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Chew, Peter; Abdelali, Ahmed (New Mexico State University, Las Cruces, NM); Kolda, Tamara Gibson

    2007-05-01

    A standard approach to cross-language information retrieval (CLIR) uses Latent Semantic Analysis (LSA) in conjunction with a multilingual parallel aligned corpus. This approach has been shown to be successful in identifying similar documents across languages - or more precisely, retrieving the most similar document in one language to a query in another language. However, the approach has severe drawbacks when applied to a related task, that of clustering documents 'language-independently', so that documents about similar topics end up closest to one another in the semantic space regardless of their language. The problem is that documents are generally more similar to other documents in the same language than they are to documents on the same topic in a different language. As a result, when using multilingual LSA, documents will in practice cluster by language, not by topic. We propose a novel application of PARAFAC2 (a variant of PARAFAC, a multi-way generalization of the singular value decomposition [SVD]) to overcome this problem. Instead of forming a single multilingual term-by-document matrix which, under LSA, is subjected to SVD, we form an irregular three-way array, each slice of which is a separate term-by-document matrix for a single language in the parallel corpus. The goal is to compute an SVD for each language such that V (the matrix of right singular vectors) is the same across all languages. Effectively, PARAFAC2 imposes the constraint, not present in standard LSA, that the 'concepts' in all documents in the parallel corpus are the same regardless of language. Intuitively, this constraint makes sense, since the whole purpose of using a parallel corpus is that exactly the same concepts are expressed in the translations. We tested this approach by comparing the performance of PARAFAC2 with standard LSA in solving a particular CLIR problem. From our results, we conclude that PARAFAC2 offers a very promising alternative to
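The standard LSA step that PARAFAC2 generalizes can be sketched in a few lines of numpy: a truncated SVD of a term-by-document matrix projects documents into a shared low-rank "concept" space, where they are compared by cosine similarity. The toy matrix and rank below are illustrative only; a real CLIR setup would build the matrix from a parallel corpus.

```python
import numpy as np

# Toy term-by-document matrix: rows = terms, columns = documents.
# Counts are illustrative, not drawn from any real corpus.
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 2.0, 1.0],
    [1.0, 0.0, 2.0],
])

# Rank-2 LSA: truncated SVD projects documents into a shared concept space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

def cosine(a, b):
    """Cosine similarity between two document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_01 = cosine(doc_vecs[0], doc_vecs[1])
sim_02 = cosine(doc_vecs[0], doc_vecs[2])
print(round(sim_01, 3), round(sim_02, 3))
```

PARAFAC2 replaces the single matrix A with one term-by-document slice per language and constrains the right singular vectors to be shared across slices, which is what keeps translated documents near each other in the concept space.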

  9. CI-Miner: A Semantic Methodology to Integrate Scientists, Data and Documents through the Use of Cyber-Infrastructure

    Science.gov (United States)

    Pinheiro da Silva, P.; CyberShARE Center of Excellence

    2011-12-01

    Scientists today face the challenge of rethinking the manner in which they document and make available their processes and data in an international cyber-infrastructure of shared resources. Some relevant examples of new scientific practices in the realm of computational and data extraction sciences include: large-scale data discovery; data integration; data sharing across distinct scientific domains; systematic management of trust and uncertainty; and comprehensive support for explaining processes and results. This talk introduces CI-Miner, an innovative hands-on, open-source, community-driven methodology to integrate these new scientific practices. It has been developed in collaboration with scientists, with the purpose of capturing, storing and retrieving knowledge about scientific processes and their products, thereby further supporting a new generation of science techniques based on data exploration. CI-Miner uses semantic annotations in the form of W3C Ontology Web Language-based ontologies and Proof Markup Language (PML)-based provenance to represent knowledge. This methodology specializes general-purpose ontologies, projected into workflow-driven ontologies (WDOs) and into semantic abstract workflows (SAWs). Provenance in PML is CI-Miner's integrative component, which allows scientists to retrieve and reason with the knowledge represented in these new semantic documents. It serves additionally as a platform to share such collected knowledge with the scientific community participating in the international cyber-infrastructure. The integrated semantic documents that are tailored for the use of human epistemic agents may also be utilized by machine epistemic agents, since the documents are based on W3C Resource Description Framework (RDF) notation. This talk is grounded upon interdisciplinary lessons learned through the use of CI-Miner in support of government-funded national and international cyber-infrastructure initiatives in the areas of geo

  10. 49 CFR 1580.103 - Location and shipping information for certain rail cars.

    Science.gov (United States)

    2010-10-01

    ... the following methods: (1) Electronic data transmission in spreadsheet format. (2) Electronic data transmission in Hyper Text Markup Language (HTML) format. (3) Electronic data transmission in Extensible Markup Language (XML). (4) Facsimile transmission of a hard copy spreadsheet in tabular format. (5) Posting the...

  11. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) of approximately ten, compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.

  12. Plug-and-Play XML

    Science.gov (United States)

    Schweiger, Ralf; Hoelzer, Simon; Altmann, Udo; Rieger, Joerg; Dudeck, Joachim

    2002-01-01

    The application of XML (Extensible Markup Language) is still costly. The authors present an approach to ease the development of XML applications. They have developed a Web-based framework that combines existing XML resources into a comprehensive XML application. The XML framework is model-driven, i.e., the authors primarily design XML document models (XML schema, document type definition), and users can enter, search, and view related XML documents using a Web browser. The XML model itself is flexible and might be composed of existing model standards. The second part of the paper relates the approach of the authors to some problems frequently encountered in the clinical documentation process. PMID:11751802

  13. Language Ideologies of Arizona Voters, Language Managers, and Teachers

    Science.gov (United States)

    Fitzsimmons-Doolan, Shannon

    2014-01-01

    Arizona is the site of many explicit language policies as well as ongoing scholarly discussions of related language ideologies--beliefs about the role of language in society. This study adds a critical piece to the investigation of the role of ideologies in language policy processes by thoroughly documenting language ideologies expressed by a…

  14. The Role of Irish Language Teaching: Cultural Identity Formation or Language Revitalization?

    Science.gov (United States)

    Slatinská, Anna; Pecníková, Jana

    2017-01-01

    The focal point of the article is Irish language teaching in the Republic of Ireland. Firstly, we deal with the most significant documents where the status of the Irish language is being defined. In this respect, for the purposes of analysis, we have chosen the document titled "20 Year Strategy for the Irish language" which plays a…

  15. Do state minimum markup/price laws work? Evidence from retail scanner data and TUS-CPS.

    Science.gov (United States)

    Huang, Jidong; Chriqui, Jamie F; DeLong, Hillary; Mirza, Maryam; Diaz, Megan C; Chaloupka, Frank J

    2016-10-01

    Minimum markup/price laws (MPLs) have been proposed as an alternative non-tax pricing strategy to reduce tobacco use and access. However, the empirical evidence on the effectiveness of MPLs in increasing cigarette prices is very limited. This study aims to fill this critical gap by examining the association between MPLs and cigarette prices. State MPLs were compiled from primary legal research databases and were linked to cigarette prices constructed from the Nielsen retail scanner data and the self-reported cigarette prices from the Tobacco Use Supplement to the Current Population Survey. Multivariate regression analyses were conducted to examine the association between MPLs and the major components of MPLs and cigarette prices. The presence of MPLs was associated with higher cigarette prices. In addition, cigarette prices were higher, above and beyond the higher prices resulting from MPLs, in states that prohibit below-cost combination sales; do not allow any distributing party to use trade discounts to reduce the base cost of cigarettes; prohibit distributing parties from meeting the price of a competitor, and prohibit distributing below-cost coupons to the consumer. Moreover, states that had total markup rates >24% were associated with significantly higher cigarette prices. MPLs are an effective way to increase cigarette prices. The impact of MPLs can be further strengthened by imposing greater markup rates and by prohibiting coupon distribution, competitor price matching, and use of below-cost combination sales and trade discounts. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  16. G-InforBIO: integrated system for microbial genomics

    Directory of Open Access Journals (Sweden)

    Abe Takashi

    2006-08-01

    Full Text Available Abstract Background Genome databases contain diverse kinds of information, including gene annotations and nucleotide and amino acid sequences. It is not easy to integrate such information for genomic study. There are few tools for integrated analyses of genomic data; therefore, we developed software that enables users to handle, manipulate, and analyze genome data with a variety of sequence analysis programs. Results The G-InforBIO system is a novel tool for genome data management and sequence analysis. The system can import genome data encoded as eXtensible Markup Language documents or as formatted text documents, including annotations and sequences, from the DNA Data Bank of Japan and GenBank, encoded as flat files. The genome database is constructed automatically after importing, and the database can be exported as documents formatted with eXtensible Markup Language or as tab-delimited text. Users can retrieve data from the database by keyword searches, edit annotation data of genes, and process data with G-InforBIO. In addition, information in the G-InforBIO database can be analyzed seamlessly with nine different software programs, including programs for clustering and homology analyses. Conclusion The G-InforBIO system simplifies genome analyses by integrating several available software programs to allow efficient handling and manipulation of genome data. G-InforBIO is freely available from the download site.
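The XML-import and tab-delimited-export round trip described above can be sketched with the Python standard library. The gene-annotation record structure below is hypothetical and does not reproduce the actual G-InforBIO schema.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical gene-annotation records; the real G-InforBIO schema differs.
GENOME_XML = """\
<genome>
  <gene id="g1"><name>dnaA</name><product>chromosomal replication initiator</product></gene>
  <gene id="g2"><name>gyrB</name><product>DNA gyrase subunit B</product></gene>
</genome>
"""

def export_tab_delimited(xml_text):
    """Flatten <gene> records into tab-delimited text with a header row."""
    root = ET.fromstring(xml_text)
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerow(["id", "name", "product"])
    for gene in root.iter("gene"):
        writer.writerow([gene.get("id"),
                         gene.findtext("name"),
                         gene.findtext("product")])
    return buf.getvalue()

print(export_tab_delimited(GENOME_XML))
```

The same pattern in reverse (reading tab-delimited rows and emitting XML elements) covers the import side.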

  17. XML for Detector Description at GLAST

    Energy Technology Data Exchange (ETDEWEB)

    Bogart, Joanne

    2002-04-30

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector.

  18. XML Syntax for Clinical Laboratory Procedure Manuals

    OpenAIRE

    Saadawi, Gilan; Harrison, James H.

    2003-01-01

    We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large b...
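A DTD for such procedure documents might look like the sketch below. The element names and structure are hypothetical illustrations of the approach, not the authors' actual element set.

```xml
<!-- Hypothetical sketch of a DTD for a clinical laboratory procedure
     document; the published element set is not reproduced here. -->
<!ELEMENT procedure  (title, principle, specimen, steps, references?)>
<!ATTLIST procedure  version CDATA #REQUIRED>
<!ELEMENT title      (#PCDATA)>
<!ELEMENT principle  (#PCDATA)>
<!ELEMENT specimen   (#PCDATA)>
<!ELEMENT steps      (step+)>
<!ELEMENT step       (#PCDATA)>
<!ELEMENT references (#PCDATA)>
```

A validating parser can then reject procedure documents that omit required sections or order them incorrectly, which is what makes a shared DTD useful across laboratories.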

  19. XML for detector description at GLAST

    International Nuclear Information System (INIS)

    Bogart, J.; Favretto, D.; Giannitrapani, R.

    2001-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. The author describes its use for a GLAST detector

  20. XML for Detector Description at GLAST

    International Nuclear Information System (INIS)

    Bogart, Joanne

    2002-01-01

    The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways which are natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup meta-language XML is well-suited to detector description. This paper describes its use for a GLAST detector

  1. New training material management system. Tasks and first experiences

    International Nuclear Information System (INIS)

    Kummer, Simon; Schoenfelder, Christian

    2012-01-01

    Rather than bringing an end to the paper-based era, the age of information technology has brought with it numerous types of publication and a plethora of new media. Anyone involved in documentation or digital paper-based publication simply cannot escape the requirement for ever more information to be printed and made available digitally in ever shorter periods of time. Anyone wrestling with this issue is bound to hit upon buzzwords such as "content management" and "XML" (Extensible Markup Language, the ability to define the role of content within a document using tags). (orig.)

  2. Firm Dynamics and Markup Variations: Implications for Sunspot Equilibria and Endogenous Economic Fluctuation

    OpenAIRE

    Nir Jaimovich

    2007-01-01

    This paper analyzes how the interaction between firms’ entry-and-exit decisions and variations in competition gives rise to self-fulfilling, expectation-driven fluctuations in aggregate economic activity and in measured total factor productivity (TFP). The analysis is based on a dynamic general equilibrium model in which net business formation is endogenously procyclical and leads to endogenous countercyclical variations in markups. This interaction leads to indeterminacy in which economic fl...

  3. Head First HTML5 Programming Building Web Apps with JavaScript

    CERN Document Server

    Freeman, Eric

    2011-01-01

    HTML has been on a wild ride. Sure, HTML started as a mere markup language, but more recently HTML's put on some major muscle. Now we've got a language tuned for building web applications with Web storage, 2D drawing, offline support, sockets and threads, and more. And to speak this language you've got to go beyond HTML5 markup and into the world of the DOM, events, and JavaScript APIs. Now you probably already know all about HTML markup (otherwise known as structure) and you know all about CSS style (presentation), but what you've been missing is JavaScript (behavior). If all you know about

  4. Handheld Computing

    National Research Council Canada - National Science Library

    Alford, Kenneth L

    2005-01-01

    .... It outlines some of the considerations involved in a PDA procurement, discusses four tools for developing PDA resource materials -- programming tools, hypertext markup language- and eXtensible markup...

  5. Modeling the Arden Syntax for medical decisions in XML.

    Science.gov (United States)

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
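The first of the two checking mechanisms, verifying that an MLM document contains the structural elements the schema requires, can be sketched with the standard library. The fragment and required paths below are hypothetical simplifications, not the real ArdenML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical ArdenML-like fragment; the real ArdenML schema is far richer.
MLM_XML = """\
<mlm>
  <maintenance><title>Hyperkalemia alert</title></maintenance>
  <knowledge>
    <logic>if potassium &gt; 6.0 then conclude true;</logic>
    <action>write "Check potassium immediately";</action>
  </knowledge>
</mlm>
"""

# Paths every MLM must contain in this simplified sketch.
REQUIRED = ["maintenance/title", "knowledge/logic", "knowledge/action"]

def check_mlm(xml_text):
    """Return the list of required paths missing from the MLM document."""
    root = ET.fromstring(xml_text)
    return [path for path in REQUIRED if root.find(path) is None]

print(check_mlm(MLM_XML))   # []
```

A production system would instead validate against the full XML schema and then apply the XSL style sheet for the second, Arden-specific syntax check.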

  6. A Conversion Tool for Mathematical Expressions in Web XML Files.

    Science.gov (United States)

    Ohtake, Nobuyuki; Kanahori, Toshihiro

    2003-01-01

    This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…
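The two MathML styles the converter bridges can be seen side by side for the expression x². Presentation markup describes layout, while content markup describes meaning, which screen readers and computer algebra systems can interpret directly:

```xml
<!-- Presentation MathML: how x squared should look. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup><mi>x</mi><mn>2</mn></msup>
</math>

<!-- Content MathML: what x squared means. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <apply><power/><ci>x</ci><cn>2</cn></apply>
</math>
```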

  7. BIBLIOGRAPHY ON LANGUAGE DEVELOPMENT.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    THIS BIBLIOGRAPHY LISTS MATERIAL ON VARIOUS ASPECTS OF LANGUAGE DEVELOPMENT. APPROXIMATELY 65 UNANNOTATED REFERENCES ARE PROVIDED TO DOCUMENTS DATING FROM 1958 TO 1966. JOURNALS, BOOKS, AND REPORT MATERIALS ARE LISTED. SUBJECT AREAS INCLUDED ARE THE NATURE OF LANGUAGE, LINGUISTICS, LANGUAGE LEARNING, LANGUAGE SKILLS, LANGUAGE PATTERNS, AND…

  8. Domain-specific markup languages and descriptive metadata: their functions in scientific resource discovery

    Directory of Open Access Journals (Sweden)

    Marcia Lei Zeng

    2010-01-01

    Full Text Available While metadata has been a strong focus within information professionals' publications, projects, and initiatives during the last two decades, a significant number of domain-specific markup languages have also been developing on a parallel path at the same rate as metadata standards; yet, they do not receive comparable attention. This essay discusses the functions of these two kinds of approaches in scientific resource discovery and points out their potential complementary roles through appropriate interoperability approaches.

  9. BASIC Instructional Program: System Documentation.

    Science.gov (United States)

    Dageforde, Mary L.

    This report documents the BASIC Instructional Program (BIP), a "hands-on laboratory" that teaches elementary programming in the BASIC language, as implemented in the MAINSAIL language, a machine-independent revision of SAIL which should facilitate implementation of BIP on other computing systems. Eight instructional modules which make up…

  10. NAVAIR Portable Source Initiative (NPSI) Data Preparation Standard V2.2: NPSI DPS V2.2

    Science.gov (United States)

    2012-05-22

    KML: Keyhole Markup Language (file format); KMZ: Keyhole Markup... required for the geo-specific texture may differ within the database depending on the mission parameters. When operating close to the ground (e.g

  11. New training material management system. Tasks and first experiences

    Energy Technology Data Exchange (ETDEWEB)

    Kummer, Simon; Schoenfelder, Christian [AREVA NP GmbH (Germany)

    2012-11-01

    Rather than bringing an end to the paper-based era, the age of information technology has brought with it numerous types of publication and a plethora of new media. Anyone involved in documentation or digital paper-based publication simply cannot escape the requirement for ever more information to be printed and made available digitally in ever shorter periods of time. Anyone wrestling with this issue is bound to hit upon buzzwords such as "content management" and "XML" (Extensible Markup Language, the ability to define the role of content within a document using tags). (orig.)

  12. RTF Pocket Guide

    CERN Document Server

    Burke, Sean

    2008-01-01

    Rich Text Format, or RTF, is the internal markup language used by Microsoft Word and understood by dozens of other word processors. RTF is a universal file format that pervades practically every desktop. Because RTF is text, it's much easier to generate and process than binary .doc files. Any programmer working with word processing documents needs to learn enough RTF to get around, whether it's to format text for Word (or almost any other word processor), to make global changes to an existing document, or to convert Word files to (or from) another format. RTF Pocket Guide is a concise and e
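A complete, minimal RTF document illustrates the format's flavor: braces delimit groups that scope formatting, and control words begin with a backslash (this is a hand-written sketch, not output from any particular word processor):

```rtf
{\rtf1\ansi\deff0
{\fonttbl{\f0 Times New Roman;}}
\f0\fs24 Plain, {\b bold}, and {\i italic} text.\par
}
```

Because the whole file is plain text like this, a program can generate or rewrite a Word-readable document with ordinary string handling, which is the point the book makes.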

  13. DOCUMENTATION OF AFRICAN LANGUAGES: A PANACEA FOR ...

    African Journals Online (AJOL)

    user

    portend danger to the cultural and linguistic identity of most of the peoples of the ... danger difficult to be observed. ... Indeed, one thing that most people would welcome is the possibility ... language of diplomacy, commerce and the Internet.

  14. Linking Language and Categorization in Infancy

    Science.gov (United States)

    Ferguson, Brock; Waxman, Sandra

    2017-01-01

    Language exerts a powerful influence on our concepts. We review evidence documenting the developmental origins of a precocious link between language and object categories in very young infants. This collection of studies documents a cascading process in which early links between language and cognition provide the foundation for later, more precise…

  15. Building blocks of e-commerce

    Indian Academy of Sciences (India)

    R. Narasimhan (Krishtel eMaging) 1461 1996 Oct 15 13:05:22

    e-commerce possible are: HTML (hypertext markup language), XML (extensible markup... Standard word processor outputs can be converted to... various types of transactions between organizations and it requires an expert to understand.

  16. Vietnamese Document Representation and Classification

    Science.gov (United States)

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
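Since written Vietnamese already delimits syllables with spaces, the syllable-pair representation the abstract reports as best-performing can be sketched as adjacent-syllable bigram extraction. This Python fragment shows only that feature-extraction step; real preprocessing (normalization, diacritics handling, feature selection) is more involved, and the helper name is illustrative:

```python
# Hedged sketch of a syllable-pair document representation: each feature
# is a pair of adjacent syllables, joined with an underscore.

def syllable_pairs(text):
    """Return adjacent-syllable-pair features for one document."""
    syllables = text.lower().split()
    return [f"{a}_{b}" for a, b in zip(syllables, syllables[1:])]

features = syllable_pairs("Việt Nam đất nước")
```

The resulting feature lists would then feed a standard vectorizer and classifier such as an SVM with a polynomial kernel, as in the paper's experiments.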

  17. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using the Extensible Markup Language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schema.

  18. The OCaml system release 4.04: Documentation and user's manual

    OpenAIRE

    Leroy, Xavier; Doligez, Damien; Frisch, Alain; Garrigue, Jacques; Rémy, Didier; Vouillon, Jérôme

    2016-01-01

    This manual documents the release 4.04 of the OCaml system. It is organized as follows. Part I, "An introduction to OCaml", gives an overview of the language. Part II, "The OCaml language", is the reference description of the language. Part III, "The OCaml tools", documents the compilers, toplevel system, and programming utilities. Part IV, "The OCaml library", describes the modules provided in the standard library.

  19. The OCaml system release 4.02: Documentation and user's manual

    OpenAIRE

    Leroy, Xavier; Doligez, Damien; Frisch, Alain; Garrigue, Jacques; Rémy, Didier; Vouillon, Jérôme

    2014-01-01

    This manual documents the release 4.02 of the OCaml system. It is organized as follows. Part I, "An introduction to OCaml", gives an overview of the language. Part II, "The OCaml language", is the reference description of the language. Part III, "The OCaml tools", documents the compilers, toplevel system, and programming utilities. Part IV, "The OCaml library", describes the modules provided in the standard library.

  20. The OCaml system release 4.06: Documentation and user's manual

    OpenAIRE

    Leroy , Xavier; Doligez , Damien; Frisch , Alain; Garrigue , Jacques; Rémy , Didier; Vouillon , Jérôme

    2017-01-01

    This manual documents the release 4.06 of the OCaml system. It is organized as follows. Part I, "An introduction to OCaml", gives an overview of the language. Part II, "The OCaml language", is the reference description of the language. Part III, "The OCaml tools", documents the compilers, toplevel system, and programming utilities. Part IV, "The OCaml library", describes the modules provided in the standard library.

  1. Designing a Dictionary for an Endangered Language Community: Lexicographical Deliberations, Language Ideological Clarifications

    Science.gov (United States)

    Kroskrity, Paul V.

    2015-01-01

    Dictionaries of endangered languages represent especially important products of language documentation, in part because they are usually the most familiar and useful genre of linguistic representation to endangered language community members. This familiarity, however, can become problematic when it is accompanied by language ideologies that…

  2. Effectiveness of a Parent-Implemented Language and Literacy Intervention in the Home Language

    Science.gov (United States)

    Ijalba, Elizabeth

    2015-01-01

    Few studies explore parent-implemented literacy interventions in the home language for young children with problems in language acquisition. A shift in children's use of the home language to English has been documented when English is the only language of instruction. When parents are not proficient in English, such language shift can limit…

  3. Language Motivation, Metacognitive Strategies and Language Performance: A Cause and Effect Correlation

    OpenAIRE

    Ag. Bambang Setiyadi; - Mahpul; Muhammad Sukirlan; Bujang Rahman

    2016-01-01

    Studies on motivation in language learning have been well documented. The role of motivation in determining the use of learning strategies has been identified and the correlation between motivation and language performance has been determined. However, how language motivation in EFL context is classified and how language motivation is inter-correlated with the use of metacognitive and language performance has still not become widespread in the literature on language learning. The current stud...

  4. Data Hiding and Security for XML Database: A TRBAC- Based Approach

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wan-song; SUN Wei; LIU Da-xin

    2005-01-01

    In order to cope with varying protection granularity levels of XML (eXtensible Markup Language) documents, we propose a TXAC (Two-level XML Access Control) framework, in which an extended TRBAC (Temporal Role-Based Access Control) approach is proposed to deal with the dynamic XML data. With different system components,TXAC algorithm evaluates access requests efficiently by appropriate access control policy in dynamic web environment.The method is a flexible and powerful security system offering a multi-level access control solution.

  5. Language Equality in International Cooperation. Esperanto Documents, New Series, No. 21.

    Science.gov (United States)

    Harry, Ralph; Mandel, Mark

    The policies of the United Nations with regard to the six official languages have left holes in the fabric of international cooperation. Maintaining language services in all six languages has proved to be an impossibility because of the scarcity of trained interpreters and translators between, for instance, Chinese and Arabic. English, French, and…

  6. Visualization Development of the Ballistic Threat Geospatial Optimization

    Science.gov (United States)

    2015-07-01

    topographic globes, Keyhole Markup Language (KML), and Collada files. World Wind gives the user the ability to import 3-D models and navigate...present. After the first-person view window is closed, the images stored in memory are then converted to a QuickTime movie (.MOV). The video will be...processing unit HPC high-performance computing JOGL Java implementation of OpenGL KML Keyhole Markup Language NASA National Aeronautics and Space

  7. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    Science.gov (United States)

    2015-09-01

    individual fragments using the hash-based method. In general, fragments 6 appear in order and relatively close to each other in the file. A fragment...data product derived from the data model is shown in Fig. 5, a Google Earth12 Keyhole Markup Language (KML) file. This product includes aggregate...System BLOb binary large object FPGA field-programmable gate array HPC high-performance computing IP Internet Protocol KML Keyhole Markup Language

  8. Europe's Babylon: Towards a Single European Language? Esperanto Documents 41A.

    Science.gov (United States)

    Fettes, Mark

    Discussion of the establishment of a single language for Europe's many countries and cultures focuses on the debate over English versus Esperanto as the language of choice. It is argued that the notion that language has not been a major barrier to intellectual exchange is a myth. In addition, while the main European political institutions support…

  9. Getting Acquainted with OPML

    CERN Document Server

    Bellinger, Amy

    2006-01-01

    You've put off figuring out what Outline Processor Markup Language (OPML) is all about and what it can do, right? We'll bring you into the picture quickly with 14 wide-ranging uses for the OPML format, including: reading lists and RSS subscription lists, blogging, a wholly new sort of intranet, process documentation, instant outlining and collaboration, and distributed directories. Included in this Short Cut are step-by-step how-to examples with illustrations to get you started using and remixing OPML right now. Let's get going.
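An OPML reading list of the kind mentioned above is a small XML outline, easy to produce with any XML library. A minimal sketch using Python's standard library; the element and attribute names (`opml`/`head`/`body`/`outline`, `xmlUrl`) follow OPML convention, while the helper name and sample feed are illustrative:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of an OPML 2.0 RSS subscription list.

def make_opml(title, feeds):
    """Build an OPML document from (text, feed_url) pairs."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for text, url in feeds:
        ET.SubElement(body, "outline", text=text, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

opml_doc = make_opml("Reading list", [("Example feed", "https://example.org/rss")])
```

Most feed readers can import and export subscription lists in exactly this shape.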

  10. Harmonised information exchange between decentralised food composition database systems

    DEFF Research Database (Denmark)

    Pakkala, Heikki; Christensen, Tue; Martínez de Victoria, Ignacio

    2010-01-01

    documentation and by the use of standardised thesauri. Subjects/Methods: The data bank is implemented through a network of local FCD storages (usually national) under the control and responsibility of the local (national) EuroFIR partner. Results: The implementation of the system based on the EuroFIR specifications is under development. The data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their system using methods and software suitable for the local computer environment. The implementation uses common international standards, such as Simple Object Access Protocol, Web Service Description Language and Extensible Markup Language (XML). A specifically constructed EuroFIR search facility (eSearch) was designed for end users. The EuroFIR eSearch facility compiles queries using a specifically designed Food Data Query Language and sends a request...

  11. Intended and unintended consequences of China's zero markup drug policy.

    Science.gov (United States)

    Yi, Hongmei; Miller, Grant; Zhang, Linxiu; Li, Shaoping; Rozelle, Scott

    2015-08-01

    Since economic liberalization in the late 1970s, China's health care providers have grown heavily reliant on revenue from drugs, which they both prescribe and sell. To curb abuse and to promote the availability, safety, and appropriate use of essential drugs, China introduced its national essential drug list in 2009 and implemented a zero markup policy designed to decouple provider compensation from drug prescription and sales. We collected and analyzed representative data from China's township health centers and their catchment-area populations both before and after the reform. We found large reductions in drug revenue, as intended by policy makers. However, we also found a doubling of inpatient care that appeared to be driven by supply, instead of demand. Thus, the reform had an important unintended consequence: China's health care providers have sought new, potentially inappropriate, forms of revenue. Project HOPE—The People-to-People Health Foundation, Inc.

  12. Computer support for physiological cell modelling using an ontology on cell physiology.

    Science.gov (United States)

    Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda

    2006-01-01

    The development of electrophysiological whole cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can get quite complicated as well, making computer modification difficult. Here, we propose a computer support to enhance structural changes of cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method to make references from CellML files to the ontology and a method to assist manipulation of model structures using markup languages together with the ontology are reported. Using these methods three software utilities, including a graphical model editor, are implemented. Experimental results proved that these methods are effective for the modification of electrophysiological models.

  13. Modeling documents with Generative Adversarial Networks

    OpenAIRE

    Glover, John

    2016-01-01

    This paper describes a method for using Generative Adversarial Networks to learn distributed representations of natural language documents. We propose a model that is based on the recently proposed Energy-Based GAN, but instead uses a Denoising Autoencoder as the discriminator network. Document representations are extracted from the hidden layer of the discriminator and evaluated both quantitatively and qualitatively.

  14. CIRQuL: Complex Information Retrieval Query Language

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Apers, Peter M.G.

    In this paper we will present a new framework for the retrieval of XML documents. We will describe the extension for existing query languages (XPath and XQuery) geared toward ranked information retrieval and full-text search in XML documents. Furthermore we will present language models for ranked

  15. A new instrument to assess physician skill at thoracic ultrasound, including pleural effusion markup.

    Science.gov (United States)

    Salamonsen, Matthew; McGrath, David; Steiler, Geoff; Ware, Robert; Colt, Henri; Fielding, David

    2013-09-01

    To reduce complications and increase success, thoracic ultrasound is recommended to guide all chest drainage procedures. Despite this, no tools currently exist to assess proceduralist training or competence. This study aims to validate an instrument to assess physician skill at performing thoracic ultrasound, including effusion markup, and examine its validity. We developed an 11-domain, 100-point assessment sheet in line with British Thoracic Society guidelines: the Ultrasound-Guided Thoracentesis Skills and Tasks Assessment Test (UGSTAT). The test was used to assess 22 participants (eight novices, seven intermediates, seven advanced) on two occasions while performing thoracic ultrasound on a pleural effusion phantom. Each test was scored by two blinded expert examiners. Validity was examined by assessing the ability of the test to stratify participants according to expected skill level (analysis of variance) and demonstrating test-retest and intertester reproducibility by comparison of repeated scores (mean difference [95% CI] and paired t test) and the intraclass correlation coefficient. Mean scores for the novice, intermediate, and advanced groups were 49.3, 73.0, and 91.5 respectively, which were all significantly different (P < .0001). There were no significant differences between repeated scores. Procedural training on mannequins prior to unsupervised performance on patients is rapidly becoming the standard in medical education. This study has validated the UGSTAT, which can now be used to determine the adequacy of thoracic ultrasound training prior to clinical practice. It is likely that its role could be extended to live patients, providing a way to document ongoing procedural competence.

  16. Historical documentation of events through visual language: the ...

    African Journals Online (AJOL)

    The purpose of modern painting is such that communication should take place non-verbally, using visual language. Mostly, colours without paints were made in his works, with the same (if not more) functions, interpretations and references. In his paintings, plastics, rubber and other materials were used to intimate the art ...

  17. The "New Oxford English Dictionary" Project.

    Science.gov (United States)

    Fawcett, Heather

    1993-01-01

    Describes the conversion of the 22,000-page Oxford English Dictionary to an electronic version incorporating a modified Standard Generalized Markup Language (SGML) syntax. Explains that the database designers chose structured markup because it supports users' data searching needs, allows textual components to be extracted or modified, and allows…

  18. Work orders management based on XML file in printing

    Directory of Open Access Journals (Sweden)

    Ran Peipei

    2018-01-01

    Full Text Available The Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express work-order information improves efficiency in management and production. Based on these features, we introduce a management technique for work orders and generate an XML file through the Document Object Model (DOM) technology in this paper. When the information is needed for production, the XML file is parsed and the information saved in a database, which benefits the preservation and modification of the information.
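The round trip the abstract describes (build a work-order document with DOM, serialize it, parse it back, then extract the fields for the database) can be sketched with Python's standard DOM module. The tag names and sample values are hypothetical, chosen only to illustrate the shape of such a file:

```python
from xml.dom.minidom import Document, parseString

# Hedged sketch: create a work-order XML file via DOM, then parse it back
# as a production system would before loading the fields into a database.

def build_work_order(order_id, product, copies):
    doc = Document()
    root = doc.createElement("workOrder")
    root.setAttribute("id", order_id)
    doc.appendChild(root)
    for tag, value in (("product", product), ("copies", str(copies))):
        el = doc.createElement(tag)
        el.appendChild(doc.createTextNode(value))
        root.appendChild(el)
    return doc.toxml()

xml_text = build_work_order("WO-001", "brochure", 5000)

# Parsing step: recover the fields from the serialized document.
parsed = parseString(xml_text)
copies = parsed.getElementsByTagName("copies")[0].firstChild.data
```

The extracted values would then be written to the database table that backs the work-order management system.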

  19. The Language Family Relation of Local Languages in Gorontalo Province (A Lexicostatistic Study)

    OpenAIRE

    Asna Ntelu; Dakia N Djou

    2017-01-01

    This study aims to find out the relation of language family and glottochronology of Gorontalo language and Atinggola language in Gorontalo Province. The research employed a comparative method, and the research instrument used a list of 200 basic Morris Swadesh vocabularies. The data source was from documents or gloss translation of 200 basic vocabularies and interview of two informants (speakers) of Gorontalo and Atinggola languages. Data analysis was done by using the lexicostatistic techniq...

  20. Geoinformation perspectives on innovation and economic growth

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2009-03-01

    Full Text Available driving patterns, monitored through a tracking device placed in the insured person’s vehicle. Real estate: Real estate agencies have been pioneers of incorporating multimedia into a GIS, to link photographs and video footage of properties... geobrowsers. With markup languages such as the Keyhole Markup Language (KML), geobrowsers can be customised to drape one’s geoinformation over the virtual globe, attach one’s content (eg: photographs, video or sound recordings) to locations in the virtual...
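Draping one's own point data over a geobrowser, as described above, mostly amounts to emitting a small KML file. A minimal sketch in Python; the namespace and element names follow the OGC KML 2.2 schema, while the helper name and the sample placemark are illustrative:

```python
import xml.etree.ElementTree as ET

# Hedged sketch: a single-placemark KML document that a geobrowser such as
# Google Earth can display. Coordinates are longitude,latitude,altitude.

KML_NS = "http://www.opengis.net/kml/2.2"

def make_kml(name, lon, lat):
    ET.register_namespace("", KML_NS)  # serialize with a default namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

kml_doc = make_kml("Sample site", 28.2787, -25.7545)
```

Saving the string as a `.kml` file is enough for most geobrowsers to attach the content to its location on the virtual globe.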

  1. Language Planning and Planned Languages: How Can Planned Languages Inform Language Planning?

    Directory of Open Access Journals (Sweden)

    Humphrey Tonkin

    2015-04-01

    Full Text Available The field of language planning (LP has largely ignored planned languages. Of classic descriptions of LP processes, only Tauli (preceded by Wüster suggests that planned languages (what Wüster calls Plansprache might bear on LP theory and practice. If LP aims "to modify the linguistic behaviour of some community for some reason," as Kaplan and Baldauf put it, creating a language de novo is little different. Language policy and planning are increasingly seen as more local and less official, and occasionally more international and cosmopolitan. Zamenhof's work on Esperanto provides extensive material, little studied, documenting the formation of the language and linking it particularly to issues of supranational LP. Defining LP decision-making, Kaplan & Baldauf begin with context and target population. Zamenhof's Esperanto came shortly before Ben-Yehuda's revived Hebrew. His target community was (mostly the world's educated elite; Ben-Yehuda's was worldwide Jewry. Both planners were driven not by linguistic interest but by sociopolitical ideology rooted in reaction to anti-Semitism and imbued with the idea of progress. Their territories had no boundaries, but were not imaginary. Function mattered as much as form (Haugen's terms, status as much as corpus. For Zamenhof, status planning involved emphasis on Esperanto's ownership by its community - a collective planning process embracing all speakers (cf. Hebrew. Corpus planning included a standardized European semantics, lexical selectivity based not simply on standardization but on representation, and the development of written, and literary, style. Esperanto was successful as linguistic system and community language, less as generally accepted lingua franca. Its terminology development and language cultivation offers a model for language revival, but Zamenhof's somewhat limited analysis of language economy left him unprepared to deal with language as power.

  2. SED-ML web tools: generate, modify and export standard-compliant simulation studies.

    Science.gov (United States)

    Bergmann, Frank T; Nickerson, David; Waltemath, Dagmar; Scharm, Martin

    2017-04-15

    The Simulation Experiment Description Markup Language (SED-ML) is a standardized format for exchanging simulation studies independently of software tools. We present the SED-ML Web Tools, an online application for creating, editing, simulating and validating SED-ML documents. The Web Tools implement all current SED-ML specifications and, thus, support complex modifications and co-simulation of models in SBML and CellML formats. Ultimately, the Web Tools lower the bar on working with SED-ML documents and help users create valid simulation descriptions. http://sysbioapps.dyndns.org/SED-ML_Web_Tools/ . fbergman@caltech.edu . © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  3. A general lexicographic model for a typological variety of ...

    African Journals Online (AJOL)

    eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route in describing a model based on entities and relations between them; MySQL (usually referred to as: Structured Query Language) ...

  4. An electronic specimen collection protocol schema (eSCPS). Document architecture for specimen management and the exchange of specimen collection protocols between biobanking information systems.

    Science.gov (United States)

    Eminaga, O; Semjonow, A; Oezguer, E; Herden, J; Akbarov, I; Tok, A; Engelmann, U; Wille, S

    2014-01-01

    The integrity of collection protocols in biobanking is essential for a high-quality sample preparation process. However, there is not currently a well-defined universal method for integrating collection protocols in the biobanking information system (BIMS). Therefore, an electronic schema of the collection protocol that is based on Extensible Markup Language (XML) is required to maintain the integrity and enable the exchange of collection protocols. The development and implementation of an electronic specimen collection protocol schema (eSCPS) was performed at two institutions (Muenster and Cologne) in three stages. First, we analyzed the infrastructure that was already established at both the biorepository and the hospital information systems of these institutions and determined the requirements for the sufficient preparation of specimens and documentation. Second, we designed an eSCPS according to these requirements. Finally, a prospective study was conducted to implement and evaluate the novel schema in the current BIMS. We designed an eSCPS that provides all of the relevant information about collection protocols. Ten electronic collection protocols were generated using the supplementary Protocol Editor tool, and these protocols were successfully implemented in the existing BIMS. Moreover, an electronic list of collection protocols for the current studies being performed at each institution was included, new collection protocols were added, and the existing protocols were redesigned to be modifiable. The documentation time was significantly reduced after implementing the eSCPS (5 ± 2 min vs. 7 ± 3 min; p = 0.0002). The eSCPS improves the integrity and facilitates the exchange of specimen collection protocols in the existing open-source BIMS.

  5. Establishing language skills in Europe : The inspirations on Chinese foreign language study

    NARCIS (Netherlands)

    Broeder, P.; Fu, G.

    2009-01-01

    In order to promote transparency and coherence in language learning, teaching and assessment, the Council of Europe (CoE) developed the Common European Framework of Reference (CEFR) and the European Language Portfolio (ELP). The CEFR and the ELP are among the most influential documents of the last

  6. Towards a Pattern Language Approach to Document Description

    Directory of Open Access Journals (Sweden)

    Robert Waller

    2012-07-01

    Full Text Available Pattern libraries, originating in architecture, are a common way to share design solutions in interaction design and software engineering. Our aim in this paper is to consider patterns as a way of describing commonly-occurring document design solutions to particular problems, from two points of view. First, we are interested in their use as exemplars for designers to follow, and second, we suggest them as a means of understanding linguistic and graphical data for their organization into corpora that will facilitate descriptive work. We discuss the use of patterns across a range of disciplines before suggesting the need to place patterns in the context of genres, with each potentially belonging to a “home genre” in which it originates and to which it makes an implicit intertextual reference intended to produce a particular reader response in the form of a reading strategy or interpretative stance. We consider some conceptual and technical issues involved in the descriptive study of patterns in naturally-occurring documents, including the challenges involved in building a document corpus.

  7. Myanmar Language Search Engine

    OpenAIRE

    Pann Yu Mon; Yoshiki Mikami

    2011-01-01

    With the enormous growth of the World Wide Web, search engines play a critical role in retrieving information from the borderless Web. Although many search engines are available for the major languages, they are not very proficient for less computerized languages, including Myanmar. The main reason is that those search engines do not consider the specific features of those languages. A search engine capable of searching Web documents written in those languages is highly n...

  8. Data Display Markup Language (DDML) Handbook

    Science.gov (United States)

    2017-01-31

    purpose of this handbook is to improve the use of DDML as a standard by presenting clear guidelines and thereby eliminating any misinterpretations ...code is slightly different for internal translators than for external translators. Like external translators, special considerations must be accounted

  9. Martin Benjamin (EPFL), The Particles of Language: "The Dictionary" as elemental data for 7000 languages across time and space

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    How can we document detailed data about all the world's languages in a consistent, unified source, in a way that can serve knowledge and technology needs for people and their machines around the globe? Dictionaries have historically presented selective information about words and their meanings within a language, or translation equivalents between languages, in idiosyncratic, incommensurable formats with little basis in data science. The Kamusi Project introduces a new approach, conceiving of language as a matrix of interrelated data elements. By documenting these elements within each language, and linking elements at conceptual and functional nodes across languages, Kamusi aims toward an elusive Big Data goal: "every word in every language." If successful, the results will run the gamut from preserving the human heritage embedded in endangered languages, to providing international vocabularies for students to succeed in science, to a Star Trek-...

  10. Ontology-Driven Translator Generator for Data Display Configurations

    National Research Council Canada - National Science Library

    Jones, Charles

    2004-01-01

    .... In addition, the method includes the specification of mappings between a language-specific ontology and its corresponding syntax specification, that is, either an eXtensible Markup Language (XML...

  11. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...

  12. Advances in oriental document analysis and recognition techniques

    CERN Document Server

    Lee, Seong-Whan

    1999-01-01

    In recent years, rapid progress has been made in computer processing of oriental languages, and the research developments in this area have resulted in tremendous changes in handwriting processing, printed oriental character recognition, document analysis and recognition, automatic input methodologies for oriental languages, etc. Advances in computer processing of oriental languages can also be seen in multimedia computing and the World Wide Web. Many of the results in those domains are presented in this book.

  13. Biomedical information retrieval across languages.

    Science.gov (United States)

    Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger

    2007-06-01

    This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
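The subword idea above can be sketched as segmenting a medical term into morphologically meaningful fragments by greedy longest-match against a lexicon. This Python fragment is a hedged illustration only: the tiny lexicon and the fallback to single characters are assumptions for the sketch, not the authors' actual multilingual lexicon or matching procedure:

```python
# Hedged sketch of subword segmentation for biomedical CLIR:
# greedy longest-match against a small, illustrative subword lexicon.

LEXICON = {"gastro", "entero", "logy", "itis", "cardio", "vascular"}

def segment(word, lexicon=LEXICON):
    """Split a word into known subwords; unknown spans fall back to chars."""
    word = word.lower()
    subwords, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest match first
            if word[i:j] in lexicon:
                subwords.append(word[i:j])
                i = j
                break
        else:
            subwords.append(word[i])  # no match: emit one character
            i += 1
    return subwords
```

In the paper's setting, the matched subwords (rather than whole words) mediate between query and document languages, which is what keeps the lexicon small enough to cover six languages.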

  14. The role of business agreements in defining textbook affordability and digital materials: A document analysis

    Directory of Open Access Journals (Sweden)

    John Raible

    2015-12-01

    Full Text Available Adopting digital materials such as eTextbooks and e-coursepacks is a potential strategy to address textbook affordability in the United States. However, university business relationships with bookstore vendors implicitly structure which instructional resources are available and in what manner. In this study, a document analysis was conducted on the bookstore contracts for the universities included in the State University System of Florida. Namely, issues of textbook affordability, digital material terminology and seller exclusivity were investigated. It was found that textbook affordability was generally conceived in terms of print rental textbooks and buyback programs, and that eTextbooks were priced higher than print textbooks (25% to 30% markup). Implications and recommendations for change are shared. DOI: 10.18870/hlrc.v5i4.284

  15. Mark-up bancário, conflito distributivo e utilização da capacidade produtiva: uma macrodinâmica pós-keynesiana

    Directory of Open Access Journals (Sweden)

    Lima Gilberto Tadeu

    2003-01-01

    Full Text Available A post-Keynesian macrodynamic model of capacity utilization, distribution and conflict inflation is developed, in which the supply of credit money is endogenous. The nominal interest rate is determined by applying a mark-up to the base rate set by the monetary authority. Over time, the banking mark-up varies with the rate of profit on physical capital, while the base rate varies with excess demand that cannot be accommodated by capacity utilization. The cases in which demand is and is not sufficient to generate full capacity utilization are analyzed.

  16. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Directory of Open Access Journals (Sweden)

    Roberto Pirrone

    2008-01-01

    Full Text Available Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base that is able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules; however, the interaction between the system and the user is performed only through textual areas for inputs and replies. An interaction that adds graphical widgets to natural language could be more effective; conversely, a graphical interaction that also involves natural language can increase the user's comfort compared with graphical widgets alone. In many applications multi-modal communication is to be preferred when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML that allows merging both interaction modalities. In this context a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that are the most suitable in relation to the data types exchanged between the user and the system, according to the context of the dialogue.

  17. 14 CFR 221.4 - English language.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false English language. 221.4 Section 221.4... REGULATIONS TARIFFS General § 221.4 English language. All tariffs and other documents and material filed with the Department pursuant to this part shall be in the English language. ...

  18. Advanced software development workstation. Engineering scripting language graphical editor: DRAFT design document

    Science.gov (United States)

    1991-01-01

    The Engineering Scripting Language (ESL) is a language designed to allow nonprogramming users to write Higher Order Language (HOL) programs by drawing directed graphs to represent the program and having the system generate the corresponding program in HOL. The ESL system supports user generation of HOL programs through the manipulation of directed graphs. The components of these graphs (nodes, ports, and connectors) are objects, each of which has its own properties and property values. The purpose of the ESL graphical editor is to allow the user to create or edit graph objects which represent programs.

  19. 76 FR 24507 - HUD Multifamily Rental Project Closing Documents: Notice Announcing Final Approved Documents and...

    Science.gov (United States)

    2011-05-02

    ... transactions, although these are familiar roles in commercial lending transactions. Accordingly, HUD has... advantage of this language is that it identifies the specific, longstanding, and familiar types of... the documents, HUD is limiting its role and giving lenders more ability to address any problems that...

  20. Model-based development of robotic systems and services in construction robotics

    DEFF Research Database (Denmark)

    Schlette, Christian; Roßmann, Jürgen

    2017-01-01

    More and more of our indoor/outdoor environments are available as 3D digital models. In particular, digital models such as the CityGML (City Geography Markup Language) format for cities and the BIM (Building Information Modeling) methodology for buildings are becoming important standards…

  1. XML in an Adaptive Framework for Instrument Control

    Science.gov (United States)

    Ames, Troy J.

    2004-01-01

    NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.
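
The description-driven pattern this record describes can be sketched with the Python standard library: a small XML instrument description (the element and attribute names here are illustrative, not the actual IML dialect) is parsed into a command dispatch table that the control software would consult.

```python
import xml.etree.ElementTree as ET

# Hypothetical IML-like description; names are invented for illustration.
IML_DOC = """
<instrument name="spectrometer">
  <command name="POWER_ON" opcode="0x01"/>
  <command name="SET_GAIN" opcode="0x02">
    <argument name="level" type="int"/>
  </command>
</instrument>
"""

def load_command_set(xml_text):
    """Build a command-name -> opcode table from the description,
    so the control software is driven by the XML rather than hard-coded."""
    root = ET.fromstring(xml_text)
    return {cmd.get("name"): int(cmd.get("opcode"), 16)
            for cmd in root.iter("command")}

commands = load_command_set(IML_DOC)
print(commands)  # {'POWER_ON': 1, 'SET_GAIN': 2}
```

Swapping in a different instrument description changes the command set without touching the code, which is the point of the IML approach.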

  2. Court Registries of Bakhchisaray Kadylyk of the 17th–19th centuries: Language and Structure of the Documents of “Qısmet-i mevaris” (“Share of the Heirs”).

    Directory of Open Access Journals (Sweden)

    O.D. Rustemov

    2016-12-01

    Full Text Available Research objective: to study the language and structure of documents of “Qısmet-i mevaris”. Research materials: Despite the publication of individual articles and even two books in one way or another thematically related to the consideration of the registries of the Crimean Sharia courts, information of a philological character about these written sources is virtually absent. At the same time, a careful and detailed examination of these documents reveals a large body of linguistic material characterizing both the linguistic situation and the era of the middle and late periods of the Crimean Khanate in general. Kadiasker books contain a significant number of unique names, toponymic and onomastic denominations, legal terminology, and the names of crafts and official titles and positions. They mention forgotten names of animal breeds and colors as well as other linguistic material. Results and novelty of the research: Documents on the division of inheritance occupy a special place in these sources and are of great interest for scholars since they contain samples of the style and structure of official documents. These documents mention the names of household items, housewares, and clothing, which is especially important from the point of view of contemporary study both of the history of the language and of the history of the Turkic clans' settling in Crimea.

  3. The tissue microarray data exchange specification: A document type definition to validate and enhance XML data

    Science.gov (United States)

    Nohle, David G; Ayers, Leona W

    2005-01-01

    Background The Association for Pathology Informatics (API) Extensible Mark-up Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data, including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA) accompanied by de-identified clinical data to approved researchers. Exporting our TMA data into the proposed API-specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs, which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API-specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institutions' exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better-specified identifiers and hierarchical relationships will make automatic use of the data possible. Our tool can be
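
As a rough illustration of the vocabulary constraint a DTD enforces, the sketch below checks that every element in a TMA DES-style export belongs to an allowed set of common data elements. The element names are invented for the example, and Python's standard library parser does not perform full DTD validation, so this covers only the vocabulary half of what a validating parse would catch.

```python
import xml.etree.ElementTree as ET

# Illustrative subset of allowed common data elements; the real TMA DES
# DTD defines the authoritative list and their nesting rules.
ALLOWED_CDES = {"tma", "block", "core", "histopathology", "slide"}

EXPORT = "<tma><block><core><histopathology/></core></block></tma>"

def find_disallowed(xml_text, allowed):
    """Return element tags not covered by the allowed CDE vocabulary."""
    return sorted({el.tag for el in ET.fromstring(xml_text).iter()} - allowed)

print(find_disallowed(EXPORT, ALLOWED_CDES))                # []
print(find_disallowed("<tma><oops/></tma>", ALLOWED_CDES))  # ['oops']
```

A locally defined element (LDE) would simply be added to the allowed set, mirroring the internal DTD extensions the record describes.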

  4. Language Identification of Kannada, Hindi and English Text Words Through Visual Discriminating Features

    Directory of Open Access Journals (Sweden)

    M.C. Padma

    2008-06-01

    Full Text Available In a multilingual country like India, a document may contain text words in more than one language. For a multilingual environment, a multilingual Optical Character Recognition (OCR) system is needed to read multilingual documents, so it is necessary to identify the different language regions of a document before feeding it to the OCRs of the individual languages. The objective of this paper is to propose a procedure based on visual clues to identify the Kannada, Hindi and English text portions of an Indian multilingual document.

  5. Content Documents Management

    Science.gov (United States)

    Muniz, R.; Hochstadt, J.; Boelke J.; Dalton, A.

    2011-01-01

    The Content Documents are created and managed by the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e. AIX, Linux, Solaris and Windows). Before an OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, with the list of all the software to be installed on it and the set where the hardware belongs; this can be, for example, the LDS, the ADS or the FR-l. The objective of this project is to create a user-interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used Ruby on Rails, one of the best tools for agile development of Web applications. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands with the command prompt, and create Web frameworks with Rails. All of this in a real-world project and in just fifteen weeks!

  6. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata describing whole resources as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup to parts of resources. This is not sufficient for use in a “metadata ecology”, where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology, where metadata is “pointed in” as annotations using XPointers and RDF, is proposed. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a “metadata ecology”.
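
The "pointed in" annotation idea can be sketched as metadata kept entirely outside the content and resolved against it by a path expression. Here ElementTree's XPath subset stands in for a real XPointer, and the property names are invented for illustration.

```python
import xml.etree.ElementTree as ET

CONTENT = """<doc>
  <section id="s1"><p>Introduction text.</p></section>
  <section id="s2"><p>Method text.</p></section>
</doc>"""

# Metadata lives outside the content and addresses fragments by path;
# the content markup itself stays free of metadata.
annotations = [
    (".//section[@id='s2']", {"dc:subject": "methods"}),
]

root = ET.fromstring(CONTENT)
resolved = [(root.find(path).get("id"), meta) for path, meta in annotations]
print(resolved)  # [('s2', {'dc:subject': 'methods'})]
```

Because the annotations are external, different actors can attach metadata conforming to different Application Profiles without ever editing the content document.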

  7. DutchParl: A corpus of parliamentary documents in Dutch

    NARCIS (Netherlands)

    Marx, M.; Schuth, A.

    2010-01-01

    A corpus called DutchParl is created which aims to contain all digitally available parliamentary documents written in the Dutch language. The first version of DutchParl contains documents from the parliaments of The Netherlands, Flanders and Belgium. The corpus is divided along three dimensions: per

  8. LCS Content Document Application

    Science.gov (United States)

    Hochstadt, Jake

    2011-01-01

    My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of the information was done with Excel and Word files passed between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to keep track of and update the software installed on each computer in a timely manner. We also included exportability, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful framework and I am grateful to have the opportunity to build this application.

  9. Dual Language Teachers' Use of Conventional, Environmental, and Personal Resources to Support Academic Language Development

    Science.gov (United States)

    Lucero, Audrey

    2015-01-01

    This article reports findings from a study that investigated the ways in which first-grade dual language teachers drew on various resources to instructionally support academic language development among Spanish-English emergent bilingual students. Classroom observations, semistructured interviews, and document collection were conducted over a…

  10. MXA: a customizable HDF5-based data format for multi-dimensional data sets

    International Nuclear Information System (INIS)

    Jackson, M; Simmons, J P; De Graef, M

    2010-01-01

    A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.

  11. EVALUATION OF SEMANTIC SIMILARITY FOR SENTENCES IN NATURAL LANGUAGE BY MATHEMATICAL STATISTICS METHODS

    Directory of Open Access Journals (Sweden)

    A. E. Pismak

    2016-03-01

    Full Text Available Subject of Research. The paper is focused on the structural organization of Wiktionary articles with regard to their use as the basis for a semantic network. Wiktionary community references, article templates and article markup features are analyzed. The problem of numerical estimation of semantic similarity for structural elements of Wiktionary articles is considered. Analysis of existing software for semantic similarity estimation of such elements is carried out; the algorithms of their functioning are studied; their advantages and disadvantages are shown. Methods. Mathematical statistics methods were used to analyze Wiktionary article markup features. A method of semantic similarity computation based on statistics for the compared structural elements was proposed. Main Results. We have concluded that there is no possibility for direct use of Wiktionary articles as the source for a semantic network. We have proposed to find hidden similarity between article elements, and for that purpose we have developed an algorithm for calculating confidence coefficients indicating that each pair of sentences is semantically close. The research of quantitative and qualitative characteristics of the developed algorithm has shown its major performance advantage over the other existing solutions, at the cost of a slightly higher error rate. Practical Relevance. The resulting algorithm may be useful in developing tools for automatic parsing of Wiktionary articles. The developed method could be used for computing the semantic similarity of short text fragments in natural language in cases where algorithm performance requirements are stricter than its accuracy requirements.
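
A minimal example of the kind of statistical similarity measure such tools build on is cosine similarity over word-count vectors. This is a generic sketch, not the paper's confidence-coefficient algorithm.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity of two sentences over word-count vectors,
    a simple statistical proxy for semantic closeness."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values())) *
            math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

s1 = "the cat sat on the mat"
s2 = "the cat lay on the mat"
print(cosine_similarity(s1, s2))  # 0.875
```

Real systems would add weighting (e.g. downweighting frequent words) before thresholding pairs as "semantically close".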

  12. The language factor in the media and information disseminating ...

    African Journals Online (AJOL)

    Information and knowledge that could be useful in achieving the MDGs do not reach the minority language speakers through the media and information dissemination organisations, because minority languages are excluded. Documentary analysis of language policy documents which enshrine language broadcasting and ...

  13. The Iranian Foreign Language Practitioners' Perspectives about Iran's Foreign Language Education Policy

    Directory of Open Access Journals (Sweden)

    Naser Rashidi

    2018-03-01

    Full Text Available The present study was conducted to identify the perceptions of Iranian foreign language practitioners about Iran's foreign language education policy within a systemic functional linguistics approach. To this end, 8 Iranian male and female foreign language practitioners were interviewed and asked to talk about what they thought of Iran's foreign language policy. The findings obtained from analysing the process types and participants employed by the Iranian foreign language practitioners within a systemic functional linguistics approach point out that the FLEP document is heavily influenced by, and draws on, well-entrenched ideological, historical, religious, economic, and political discourses. Further investigations within a systemic functional linguistics approach indicate that the Iranian teachers believed that while English is a tool for understanding cultural exchanges and transferring technological advances, achieving these goals through the teaching of English is sometimes problematic within an absolute Islamic framework. The findings obtained from a transitivity analysis of the Iranian foreign language practitioners' responses to the interview questions are also indicative of the Iranian foreign language teachers' loyalty to the "the younger, the better" belief. Likewise, course content was a topic of controversy: some of the practitioners believed that course content should be developed around a variety of topics, whereas others asserted that the inclusion of different topics in the foreign language education policy document may increase the workload on the part of the teachers. Other issues such as culture, the Islamic ideology, and imperialism were identified as causes of different understandings among the Iranian foreign language practitioners as well.

  14. 37 CFR 3.26 - English language requirement.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false English language requirement... English language requirement. The Office will accept and record non-English language documents only if accompanied by an English translation signed by the individual making the translation. [62 FR 53202, Oct. 10...

  15. Word-length algorithm for language identification of under-resourced languages

    Directory of Open Access Journals (Sweden)

    Ali Selamat

    2016-10-01

    Full Text Available Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for solving the problem of language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world's languages. The primary objective of this study is to propose a lexicon-based algorithm which is able to perform language identification using minimal training data. Because language identification is often the first step in many natural language processing tasks, it is necessary to explore techniques that will perform language identification in the shortest possible time. Hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good accuracy of language identification at the document level and at the sentence level based on the available dataset. The improved algorithm also showed significant improvement in run-time performance compared with the spelling-checker approach.
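
A toy sketch of the word-length idea, assuming the simplest possible variant: each language is summarized by a normalized word-length distribution, and a text is assigned to the language whose distribution is closest (L1 distance). The training snippets below are far too small for real use, which needs corpus-sized samples such as the UDHR texts the paper cites.

```python
from collections import Counter

def length_profile(text):
    """Normalized distribution of word lengths, the lightweight
    per-language signature this approach trains on."""
    lengths = Counter(len(w) for w in text.split())
    total = sum(lengths.values())
    return {k: v / total for k, v in lengths.items()}

def identify(text, profiles):
    """Pick the language whose length profile is closest (L1 distance)."""
    target = length_profile(text)
    def dist(p):
        keys = set(target) | set(p)
        return sum(abs(target.get(k, 0) - p.get(k, 0)) for k in keys)
    return min(profiles, key=lambda lang: dist(profiles[lang]))

# Toy training data (illustrative only).
profiles = {
    "english": length_profile("all human beings are born free and equal in dignity"),
    "german": length_profile("alle menschen sind frei und gleich an wuerde und rechten geboren"),
}
print(identify("the man and the dog are equal", profiles))  # english
```

Because the model is just a small histogram per language, both training and identification are fast, which is the run-time advantage the study measures.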

  16. GSFC Systems Test and Operation Language (STOL) functional requirements and language description

    Science.gov (United States)

    Desjardins, R.; Hall, G.; Mcguire, J.; Merwarth, P.; Mocarsky, W.; Truszkowski, W.; Villasenor, A.; Brosi, F.; Burch, P.; Carey, D.

    1978-01-01

    The Systems Tests and Operation Language (STOL) provides the means for user communication with payloads, applications programs, and other ground system elements. It is a systems operation language that enables an operator or user to communicate a command to a computer system. The system interprets each high level language directive from the user and performs the indicated action, such as executing a program, printing out a snapshot, or sending a payload command. This document presents the following: (1) required language features and implementation considerations; (2) basic capabilities; (3) telemetry, command, and input/output directives; (4) procedure definition and control; (5) listing, extension, and STOL nucleus capabilities.

  17. XML — an opportunity for data standards in the geosciences

    Science.gov (United States)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast, hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.

  18. Legal Language: What Is It and What Can We Do about It?

    Science.gov (United States)

    Charrow, Veda R.; Crandall, JoAnn

    Legal language is discussed in the context of concern about the comprehensibility of consumer documents and the trend toward simplification of the language used in these documents. Specific features of legal language and its functions within the legal community and society are identified. As a primary tool of the legal profession, legal language…

  19. The language of football

    DEFF Research Database (Denmark)

    Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard

    2014-01-01

The language of football: A cultural analysis of selected World Cup nations. This essay describes how actions on the football field relate to the nations' different cultural understanding of football and how these actions become spoken dialects within a language of football. Saussure reasoned language to have two components: a language system and language users (Danesi, 2003). Consequently, football can be characterized as a language containing a system with specific rules of the game and users with actual choices and actions within the game. All football players can be considered language users at different levels (Schein, 2004), in which each player and his actions can be considered an artefact, a concrete symbol in motion embedded in espoused values and basic assumptions. Therefore, the actions of each dialect are strongly connected to the underlying understanding of football. By document and video…

  20. Motivating the Documentation of the Verbal Arts: Arguments from Theory and Practice

    Science.gov (United States)

    Fitzgerald, Colleen M.

    2017-01-01

    For language documentation to be sufficiently extensive to cover a given community's language practices (cf. Himmelmann 1998), then including verbal arts is essential to ensure the richness of that comprehensive record. The verbal arts span the creative and artistic uses of a given language by speakers, such as storytelling, songs, puns and…

  1. South Africa's new African language dictionaries and their use for ...

    African Journals Online (AJOL)

    Dictionaries are useful tools for language documentation and standardization, as they try to cover and document the general vocabulary (general dictionaries) or the specialized vocabulary (technical dictionaries). They empower the language users because they help to improve communication by providing users with the ...

  2. XML for data representation and model specification in neuroscience.

    Science.gov (United States)

    Crook, Sharon M; Howell, Fred W

    2007-01-01

    EXtensible Markup Language (XML) technology provides an ideal representation for the complex structure of models and neuroscience data, as it is an open file format and provides a language-independent method for storing arbitrarily complex structured information. XML is composed of text and tags that explicitly describe the structure and semantics of the content of the document. In this chapter, we describe some of the common uses of XML in neuroscience, with case studies in representing neuroscience data and defining model descriptions based on examples from NeuroML. The specific methods that we discuss include (1) reading and writing XML from applications, (2) exporting XML from databases, (3) using XML standards to represent neuronal morphology data, (4) using XML to represent experimental metadata, and (5) creating new XML specifications for models.

  3. XML technology planning database : lessons learned

    Science.gov (United States)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  4. The Ruby programming language

    CERN Document Server

    Flanagan, David

    2008-01-01

    This book begins with a quick-start tutorial to the language, and then explains the language in detail from the bottom up: from lexical and syntactic structure to datatypes to expressions and statements and on through methods, blocks, lambdas, closures, classes and modules. The book also includes a long and thorough introduction to the rich API of the Ruby platform, demonstrating -- with heavily-commented example code -- Ruby's facilities for text processing, numeric manipulation, collections, input/output, networking, and concurrency. An entire chapter is devoted to Ruby's metaprogramming capabilities. The Ruby Programming Language documents the Ruby language definitively but without the formality of a language specification. It is written for experienced programmers who are new to Ruby, and for current Ruby programmers who want to challenge their understanding and increase their mastery of the language.

  5. 19 CFR 122.4 - English language required.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false English language required. 122.4 Section 122.4... TREASURY AIR COMMERCE REGULATIONS General Definitions and Provisions § 122.4 English language required. A translation in the English language shall be attached to the original and each copy of any form or document...

  6. Linguistic Identity Positioning in Facebook Posts during Second Language Study Abroad: One Teen's Language Use, Experience, and Awareness

    Science.gov (United States)

    Dressler, Roswita; Dressler, Anja

    2016-01-01

    Teens who post on the popular social networking site Facebook in their home environment often continue to do so on second language study abroad sojourns. These sojourners use Facebook to document and make sense of their experiences in the host culture and position themselves with respect to language(s) and culture(s). This study examined one…

  7. English Language Assessment in the Colleges of Applied Sciences in Oman: Thematic Document Analysis

    Science.gov (United States)

    Al Hajri, Fatma

    2014-01-01

    Proficiency in English language and how it is measured have become central issues in higher education research as the English language is increasingly used as a medium of instruction and a criterion for admission to education. This study evaluated the English language assessment in the foundation Programme at the Colleges of Applied sciences in…

  8. Language Analysis in the Context of the Asylum Process: Procedures, Validity, and Consequences

    Science.gov (United States)

    Reath, Anne

    2004-01-01

    In 1993, the language section of the Swedish Migration Board initiated the production of documents they called "language analyses" to aid in the processing of asylum seekers. Today, 11 years later, 2 privately owned companies in Stockholm produce these documents. These companies have produced language analyses not only for the Swedish…

  9. Tracking the Progress of English Language Learners

    Science.gov (United States)

    Murphy, Audrey F.

    2009-01-01

    Educators need to document progress for English language learners and identify the best structures to put into place in order to record their growth. Beginning with the stages of language proficiency, student progress can be tracked through the use of a baseline in all four language strands and the creation of rubrics to monitor performance. Language…

  10. CSchema: A Downgrading Policy Language for XML Access Control

    Institute of Scientific and Technical Information of China (English)

    Dong-Xi Liu

    2007-01-01

    The problem of regulating access to XML documents has attracted much attention from both academic and industry communities. In existing approaches, the XML elements specified by access policies are either accessible or inaccessible according to their sensitivity. However, in some cases, the original XML elements are sensitive and inaccessible, but after being processed in some appropriate way, the results become insensitive and thus accessible. This paper proposes a policy language to accommodate such cases, which can express downgrading operations on sensitive data in XML documents through explicit calculations on them. The proposed policy language is called calculation-embedded schema (CSchema), which extends ordinary schema languages with protection types for protecting sensitive data and specifying downgrading operations. CSchema has a type system to guarantee the type correctness of the embedded calculation expressions; this type system also generates a security view after type checking a CSchema policy. Access policies specified by CSchema are enforced by a validation procedure, which produces released documents containing only the accessible data by validating the protected documents against CSchema policies. These released documents are then ready to be accessed by, for instance, XML query engines. By incorporating this validation procedure, other XML processing technologies can use CSchema as their access control module.
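    The downgrading idea described in this abstract can be illustrated with a minimal sketch (not the paper's actual CSchema language): instead of suppressing a sensitive element outright, a per-element calculation releases a derived, less sensitive value. The policy table, element names, and downgrading functions below are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical policy: element tag -> downgrading calculation.
POLICY = {
    "salary": lambda text: "high" if int(text) > 50000 else "low",  # release a band, not the figure
    "ssn": lambda text: "***-**-" + text[-4:],                      # release last four digits only
}

def release(doc_xml: str) -> str:
    """Validate-and-release: copy the document, rewriting protected elements."""
    root = ET.fromstring(doc_xml)
    for elem in root.iter():
        rule = POLICY.get(elem.tag)
        if rule is not None:
            elem.text = rule(elem.text)
    return ET.tostring(root, encoding="unicode")

doc = "<employee><name>Ann</name><salary>62000</salary><ssn>123-45-6789</ssn></employee>"
print(release(doc))
# -> <employee><name>Ann</name><salary>high</salary><ssn>***-**-6789</ssn></employee>
```

    The released document keeps the element structure, so downstream XML query engines can consume it unchanged, which is the access-control arrangement the abstract describes.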

  11. Discursive Mechanisms and Human Agency in Language Policy Formation: Negotiating Bilingualism and Parallel Language Use at a Swedish University

    Science.gov (United States)

    Källkvist, Marie; Hult, Francis M.

    2016-01-01

    In the wake of the enactment of Sweden's Language Act in 2009 and in the face of the growing presence of English, Swedish universities have been called upon by the Swedish Higher Education Authority to craft their own language policy documents. This study focuses on the discursive negotiation of institutional bilingualism by a language policy…

  12. Transitioning Existing Content: inferring organisation-specific documents

    Directory of Open Access Journals (Sweden)

    Arijit Sengupta

    2000-11-01

    Full Text Available A definition for a document type within an organization represents an organizational norm about the way the organizational actors represent products and supporting evidence of organizational processes. Generating a good organization-specific document structure is, therefore, important since it can capture a shared understanding among the organizational actors about how certain business processes should be performed. Current tools that generate document type definitions focus on the underlying technology, emphasizing tags created in a single instance document. The tools thus fall short of capturing the shared understanding between organizational actors about how a given document type should be represented. We propose a method for inferring organization-specific document structures using multiple instance documents as inputs. The method consists of heuristics that combine individual document definitions, which may have been compiled using standard algorithms. We propose a number of heuristics utilizing artificial intelligence and natural language processing techniques. As the research progresses, the heuristics will be tested on a suite of test cases representing multiple instance documents for different document types. The complete methodology will be implemented as a research prototype.

  13. Design of the XML Security System for Electronic Commerce Application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The advent of the World Wide Web (WWW) first triggered mass adoption of the Internet for public access to digital information exchanged across the globe. To win a large market on the Web, a special security infrastructure needs to be put into place, transforming the wild-and-woolly Internet into a network with end-to-end protections. XML (eXtensible Markup Language) is widely accepted as a powerful data representation standard for electronic documents, so a security mechanism for XML documents must be provided in the first place to secure electronic commerce over the Internet. In this paper the authors design and implement a secure framework that provides an XML signature function, an XML element-wise encryption function, a smart-card-based crypto API library, and Public Key Infrastructure (PKI) security functions to achieve confidentiality, integrity, message authentication, and/or signer authentication services for XML documents and existing non-XML documents exchanged over the Internet for e-commerce applications.
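    The element-wise protection this framework describes can be sketched in miniature with a keyed hash (HMAC) from the Python standard library; this stands in for real XML Signature and PKI machinery, and the key and element names are illustrative only.

```python
import hmac
import hashlib
import xml.etree.ElementTree as ET

KEY = b"demo-shared-secret"  # placeholder; a real framework would use PKI-managed keys

def sign_element(elem: ET.Element) -> str:
    """Sign one element's serialized bytes, leaving the rest of the document alone."""
    return hmac.new(KEY, ET.tostring(elem), hashlib.sha256).hexdigest()

def verify_element(elem: ET.Element, signature: str) -> bool:
    return hmac.compare_digest(sign_element(elem), signature)

order = ET.fromstring("<order><item>book</item><card>4111-xxxx</card></order>")
card = order.find("card")
sig = sign_element(card)
print(verify_element(card, sig))   # True for the untouched element
card.text = "5500-xxxx"            # tampering with the element breaks verification
print(verify_element(card, sig))   # False
```

    Signing only the sensitive element, rather than the whole document, is what allows the rest of the message to be processed or rewritten by intermediaries without invalidating the protection.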

  14. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Full Text Available Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents, but they are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, an unsupervised approach based on the language processing mechanisms studied in neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words using deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear model that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.

  15. Natural language generation in health care.

    Science.gov (United States)

    Cawsey, A J; Webber, B L; Jones, R B

    1997-01-01

    Good communication is vital in health care, both among health care professionals and between health care professionals and their patients. Well-written documents describing and/or explaining the information in structured databases may be easier to comprehend, more edifying, and even more convincing than the structured data themselves, even when the latter are presented in tabular or graphic form. Documents may be automatically generated from structured data using techniques from the field of natural language generation. These techniques are concerned with how the content, organization, and language used in a document can be dynamically selected, depending on the audience and context. They have been used to generate health education materials, explanations and critiques in decision support systems, and medical reports and progress notes.

  16. STS Case Study Development Support

    Science.gov (United States)

    Rosa de Jesus, Dan A.; Johnson, Grace K.

    2013-01-01

    The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes, with the goal of providing unique educational materials that enhance critical thinking, decision-making, and problem-solving skills. During this third phase of the project, responsibilities included the revision of the Hyper Text Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and the addition and editing of website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo-editing software and training in how to use NASA's Content Management System for website design. The outcome of this project was its release to the public.

  17. Fuzzy markup language for malware behavioral analysis

    NARCIS (Netherlands)

    Huang, H.-D.; Acampora, G.; Loia, V.; Lee, C.-S.; Hagras, H.; Wang, M.-H.; Kao, H.-Y.; Chang, J.-G.; Acampora, G.; Loia, V.; Lee, Ch.-Sh.; Wang, M.-H.

    2013-01-01

    In recent years, antimalware applications represented one of the most important research topics in the area of network security threat. In addition, malware have become a growing important problem for governments and commercial organizations. The key point of the research on the network security is

  18. Software development without languages

    Science.gov (United States)

    Osborne, Haywood S.

    1988-01-01

    Automatic programming generally involves the construction of a formal specification; i.e., one which allows unambiguous interpretation by tools for the subsequent production of the corresponding software. Previous practical efforts in this direction have focused on the serious problems of: (1) designing the optimum specification language; and (2) mapping (translating or compiling) from this specification language to the program itself. The approach proposed bypasses the above problems. It postulates that the specification proper should be an intermediate form, with the sole function of containing information sufficient to facilitate construction of programs and also of matching documentation. Thus, the means of forming the intermediary becomes a human factors task rather than a linguistic one; human users will read documents generated from the specification, rather than the specification itself.

  19. XML Diagnostics Description Standard

    International Nuclear Information System (INIS)

    Neto, A.; Fernandes, H.; Varandas, C.; Lister, J.; Yonekawa, I.

    2006-01-01

    A standard for the self-description of fusion plasma diagnostics will be presented, based on the Extensible Markup Language (XML). The motivation is to maintain and organise the information on all the components of a laboratory experiment, from the hardware to the access security, to save time and money when problems arise. Since there is no existing standard to organise this kind of information, every Association stores and organises each experiment in different ways. This can lead to severe problems when the organisation schema is poorly documented or written in national languages. The exchange of scientists, researchers and engineers between laboratories is a common practice nowadays. Sometimes they have to install new diagnostics or to update existing ones, and frequently they lose a great deal of time trying to understand the currently installed system. The most common problems are: no documentation available; the person who understood the system has left; documentation written in the national language. Standardisation is the key to solving all the problems mentioned. From the commercial information on the diagnostic (component supplier; component price) to the hardware description (component specifications; drawings) to the operation of the equipment (finite state machines), through change control (who changed what and when) and internationalisation (information at least in the native language and in English), a common XML schema will be proposed. This paper will also discuss an extension of these ideas to the self-description of ITER plant systems, since the problems will be identical. (author)

  20. The Hierarchy of Minority Languages in New Zealand

    Science.gov (United States)

    de Bres, Julia

    2015-01-01

    This article makes a case for the existence of a minority language hierarchy in New Zealand. Based on an analysis of language ideologies expressed in recent policy documents and interviews with policymakers and representatives of minority language communities, it presents the arguments forwarded in support of the promotion of different types of…

  1. Speech-Language Pathology: Preparing Early Interventionists

    Science.gov (United States)

    Prelock, Patricia A.; Deppe, Janet

    2015-01-01

    The purpose of this article is to explain the role of speech-language pathology in early intervention. The expected credentials of professionals in the field are described, and the current numbers of practitioners serving young children are identified. Several resource documents available from the American Speech-­Language Hearing Association are…

  2. Foreign Language Training in the United States Peace Corps.

    Science.gov (United States)

    Kulakow, Allan

    This document reports on the foreign language training offered in the Peace Corps. Following a brief introductory statement, a list of languages taught by the Peace Corps in the years 1961-67 is provided, as well as a brief description of Peace Corps language training methods. Guidelines for language coordinators are outlined, and the approach to…

  3. PERBANDINGAN ANTARA “BIG” WEB SERVICE DENGAN RESTFUL WEB SERVICE UNTUK INTEGRASI DATA BERFORMAT GML

    Directory of Open Access Journals (Sweden)

    Adi Nugroho

    2012-01-01

    Full Text Available Java Web services, SOAP-based (JAX-WS, the Java API for XML Web Services) and RESTful (JAX-RS, the Java API for RESTful Web Services), are now competing technologies for integrating data residing in different systems. Both Web service technologies, of course, have advantages and disadvantages. In this paper, we compare the two Java Web service technologies in the context of developing GIS (Geographic Information System) applications that integrate data formatted in GML (Geography Markup Language) and stored in XML (eXtensible Markup Language) database systems.
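    Whichever service style carries it, the GML payload itself is plain XML and can be consumed with a standard parser. A small sketch of reading a GML point follows; the coordinates and srsName are illustrative, not taken from the paper.

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

payload = (
    '<gml:Point xmlns:gml="http://www.opengis.net/gml" srsName="EPSG:4326">'
    "<gml:pos>-7.25 112.75</gml:pos>"
    "</gml:Point>"
)

point = ET.fromstring(payload)
# EPSG:4326 in GML 3 lists latitude before longitude inside gml:pos.
lat, lon = (float(v) for v in point.find(f"{{{GML_NS}}}pos").text.split())
print(lat, lon)  # -7.25 112.75
```

    A SOAP endpoint would wrap such a fragment in an envelope, while a RESTful endpoint would typically return it as the response body; the parsing step is the same either way.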

  4. Implementing ATML in Distributed ATS for SG-III Prototype

    International Nuclear Information System (INIS)

    Chen Ming; Yang Cunbang; Lu Junfeng; Ding Yongkun; Yin Zejie; Zheng Zhijian

    2007-01-01

    With the forthcoming large-scale scientific experimental systems, we are looking for ways to construct an open, distributed architecture spanning the new and the existing automatic test systems. The new Automatic Test Markup Language standard meets our demand for data exchange within this architecture by defining test routines and resultant data in XML format. This paper introduces the concept of ATML (Automatic Test Markup Language) and related standards, and the significance of these new standards for a distributed automatic test system. It also describes the implementation of ATML through the integration of this technology among the existing and new test systems.

  5. Teach Yourself VISUALLY HTML5

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    Make mark-up language more manageable with this visual guide HTML5 is the next-generation of web standard mark-up language, and among other things, it offers amazing new avenues for incorporating multimedia into your sites. What easier way to master all of HTML5's new bells and whistles than with a guide that shows you, screenshot by screenshot, just what to do? Over a hundred tasks that web designers need to know most are explained using, full-color screenshots and how-to steps. From the easy stuff like revised new header and footer elements to complex updates such as canvas and audio, this

  6. Language Problems and the Final Act. Esperanto Documents, New Series No. 11A.

    Science.gov (United States)

    Universal Esperanto Association, Rotterdam (Netherlands).

    The Final Act of the Conference on Security and Co-operation in Europe, linguistic problems in the way of cooperation, language differences and the potential for discriminatory practice, and the need for a new linguistic order are discussed. It is suggested that misunderstandings arising from differences of language reduce the ability of the 35…

  7. The inclusion of an online journal in PubMed central - a difficult path.

    Science.gov (United States)

    Grech, Victor

    2016-01-01

    The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.

  8. The Language Problem in Tourism. Esperanto Documents No. 34A.

    Science.gov (United States)

    Universal Esperanto Association, Rotterdam (Netherlands).

    Esperanto can remove the language problem from tourism and bring major moral, economic, and practical advantages for all of humankind. Officials of national and local tourism organizations and companies should encourage its application in various ways, for example, by using it for: publication of publicity materials destined for use abroad; public…

  9. Multilingual Federated Searching Across Heterogeneous Collections.

    Science.gov (United States)

    Powell, James; Fox, Edward A.

    1998-01-01

    Describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. Details Searchable Database Markup Language (SearchDB-ML) for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages. (Author)

  10. Use of Francophone Tales in Developing Language Competences

    Directory of Open Access Journals (Sweden)

    Nataša Žugelj

    2015-12-01

    Full Text Available Traditional folktales, as authentic documents, belong to a literary genre that can be of great use in enhancing foreign language learning. When accompanied by diverse and fun activities, they can convert foreign language learning into a very positive experience for different age groups. Folktales with language exercises for developing different language skills can be a great source for language analysis, vocabulary building, and better expression in a foreign language. Their restricted length and identifiable content make folktales user-friendly for teaching.

  11. Document retrieval on repetitive string collections.

    Science.gov (United States)

    Gagie, Travis; Hartikainen, Aleksi; Karhu, Kalle; Kärkkäinen, Juha; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni

    2017-01-01

    Most of the fastest-growing string collections today are repetitive, that is, most of the constituent documents are similar to many others. As these collections keep growing, a key approach to handling them is to exploit their repetitiveness, which can reduce their space usage by orders of magnitude. We study the problem of indexing repetitive string collections in order to perform efficient document retrieval operations on them. Document retrieval problems are routinely solved by search engines on large natural language collections, but the techniques are less developed on generic string collections. The case of repetitive string collections is even less understood, and there are very few existing solutions. We develop two novel ideas, interleaved LCPs and precomputed document lists, that yield highly compressed indexes solving the problems of document listing (find all the documents where a string appears), top-k document retrieval (find the k documents where a string appears most often), and document counting (count the number of documents where a string appears). We also show that a classical data structure supporting the latter query becomes highly compressible on repetitive data. Finally, we show how the tools we developed can be combined to solve ranked conjunctive and disjunctive multi-term queries under the simple [Formula: see text] model of relevance. We thoroughly evaluate the resulting techniques in various real-life repetitiveness scenarios, and recommend the best choices for each case.
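    The three query types named in this abstract are easy to state as a naive reference implementation on a tiny collection; the paper's contribution is answering them from highly compressed indexes, which this sketch makes no attempt at. The sample collection is invented for illustration.

```python
from collections import Counter

docs = ["abracadabra", "banana cabana", "abra abra", "plain text"]

def document_listing(pattern):
    """All document ids whose text contains the pattern."""
    return [i for i, d in enumerate(docs) if pattern in d]

def document_counting(pattern):
    """Number of documents containing the pattern."""
    return len(document_listing(pattern))

def top_k(pattern, k):
    """The k documents where the pattern occurs most often."""
    freq = Counter({i: docs[i].count(pattern) for i in document_listing(pattern)})
    return [doc_id for doc_id, _ in freq.most_common(k)]

print(document_listing("abra"))   # [0, 2]
print(document_counting("abra"))  # 2
```

    On a repetitive collection this brute-force scan repeats almost identical work for every near-duplicate document, which is exactly the redundancy the compressed indexes in the paper exploit.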

  12. Recommended documentation for computer users at ANL

    Energy Technology Data Exchange (ETDEWEB)

    Heiberger, A.A.

    1992-04-01

    Recommended Documentation for Computer Users at ANL is for all users of the services available from the Argonne National Laboratory (ANL) Computing and Telecommunications Division (CTD). This document will guide you in selecting available documentation that will best fill your particular needs. Chapter 1 explains how to use this document to select documents and how to obtain them from the CTD Document Distribution Counter. Chapter 2 contains a table that categorizes available publications. Chapter 3 gives descriptions of the online DOCUMENT command for CMS, VAX, and the Sun workstation. DOCUMENT allows you to scan for and order documentation that interests you. Chapter 4 lists publications by subject. Categories I and IX cover publications of a general nature and publications on telecommunications and networks, respectively. Categories II, III, IV, V, VI, VII, VIII, and X cover publications on specific computer systems. Category XI covers publications on advanced scientific computing at Argonne. Chapter 5 contains abstracts for each publication, all arranged alphabetically. Chapter 6 describes additional publications containing bibliographies and master indexes that the user may find useful. The appendix identifies available computer systems, applications, languages, and libraries.

  13. 8 CFR 1003.33 - Translation of documents.

    Science.gov (United States)

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Translation of documents. 1003.33 Section... PROVISIONS EXECUTIVE OFFICE FOR IMMIGRATION REVIEW Immigration Court-Rules of Procedure § 1003.33 Translation... by an English language translation and a certification signed by the translator that must be printed...

  14. A document preparation system in a large network environment

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.; Bouchier, S.; Sanders, C.; Sydoriak, S.; Wheeler, K.

    1988-01-01

    At Los Alamos National Laboratory, we have developed an integrated document preparation system that produces publication-quality documents. This system combines text formatters and computer graphics capabilities that have been adapted to meet the needs of users in a large scientific research laboratory. This paper describes the integration of document processing technology to develop a system architecture, based on a page description language, to provide network-wide capabilities in a distributed computing environment. We describe the Laboratory requirements, the integration and implementation issues, and the challenges we faced developing this system.

  15. Request language of the JINR information retrieval system

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Kozhenkova, Z.I.

    1975-01-01

    A classification of operating languages of information retrieval systems (IRS) is given. A justification is made for the selection of the descriptor no-grammar language for coding documents and queries in the JINR IRS. A Boolean form query minimization algorithm is applied.

  16. An Intranet for the Systems Management Curricular Office

    National Research Council Canada - National Science Library

    Morgan, Reece

    1997-01-01

    ...), hypertext markup language (HTML) pages, and search engines. Many organizations are now using or building intranets to distribute, collect, and share timely, consistent, and accurate information...

  17. IAEA technical documents (TECDOCs) 1992-2002. International Atomic Energy Agency publications

    International Nuclear Information System (INIS)

    2003-02-01

    This catalogue lists all technical documents (TECDOCs) of the International Atomic Energy Agency issued between 1 January 1992 and 31 December 2002. It is divided into two parts. The first part lists all documents in numerical order, starting with the most recent publication. The second part lists all documents by subject category, in alphabetical order within each category. Most publications are issued in English, although some are also available in other languages.

  18. A Google Earth Grand Tour of the Terrestrial Planets

    Science.gov (United States)

    De Paor, Declan; Coba, Filis; Burgin, Stephen

    2016-01-01

    Google Earth is a powerful instructional resource for geoscience education. We have extended the virtual globe to include all terrestrial planets. Downloadable Keyhole Markup Language (KML) files (Google Earth's markup language) associated with this paper include lessons about Mercury, Venus, the Moon, and Mars. We created "grand…
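    A KML lesson file of the kind this record describes boils down to placemarks in the KML 2.2 namespace. The sketch below builds a minimal one with the Python standard library; the feature name and the (approximate) Olympus Mons coordinates are illustrative, not taken from the paper's materials.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # serialize KML elements without a prefix

def placemark(name: str, lon: float, lat: float) -> str:
    """Return a minimal KML document containing one named point placemark."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are longitude,latitude,altitude
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

print(placemark("Olympus Mons", -133.8, 18.65))
```

    Loading such a file onto a Mars imagery layer in Google Earth drops a labeled pin at the feature, which is the basic building block of the planetary "grand tour" lessons.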

  19. The language of football

    DEFF Research Database (Denmark)

    Rossing, Niels Nygaard; Skrubbeltrang, Lotte Stausgaard

    2017-01-01

    This essay aims to describe how actions on the football field relate to different national teams' and countries' cultural understandings of football, and how these actions become spoken dialects within a language of football. Inspired by Edgar Schein's framework of culture, the Brazilian and Italian national team football cultures were examined. The basis of the analysis was both document and video analysis. The documents were mostly research studies and popular books on the national football cultures, while the video analysis included all matches involving Italy and Brazil from the World Cups in 2010 and 2014. The cultural analysis showed some coherence between the national football cultures and the national teams, which suggested a national dialect within the language of the game. Each national dialect seemed to be based on different basic assumptions and, to some extent, specific symbolic…

  20. Axiological Significance of Historical Pedagogic Expertise of Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    M. V. Bulygina

    2012-01-01

    Full Text Available The paper deals with one of the most urgent problems of language teaching in Russia: the need to raise the quality of foreign language and culture teaching in compulsory secondary school. By analyzing the current normative documents and perspectives of further development, the basic values of language teaching were highlighted and correlated with the teaching experience and achievements of the pre-revolutionary era. For the first time, the issue is discussed at the regional level of one of the remote Russian territories, to the south of the Urals. The basic research methods include the historical, logical and comparative analysis of language teaching in the pre-revolutionary era reflected in some previously unknown archive documents. The updating of this historic pedagogical experience could, on the one hand, preserve the national language teaching traditions, and on the other hand, give way to innovative regional projects facilitating language teaching, multilingualism and multicultural trends in society.

  1. The Language Family Relation of Local Languages in Gorontalo Province (A Lexicostatistic Study

    Directory of Open Access Journals (Sweden)

    Asna Ntelu

    2017-11-01

    Full Text Available This study aims to determine the language family relation and glottochronology of the Gorontalo and Atinggola languages in Gorontalo Province. The research employed a comparative method, and the research instrument used a list of 200 basic Morris Swadesh vocabularies. The data source was documents or gloss translations of the 200 basic vocabularies and interviews with two informants (speakers of the Gorontalo and Atinggola languages). Data analysis was done using the lexicostatistic technique. The following indicators were used to determine the word family: (a) identical pairs, (b) word pairs with phonemic correspondences, (c) phonetic similarities, and (d) a different phoneme. The results of data analysis reveal that there are 109, or 55.05%, word pairs of the word family out of the 200 basic vocabularies of Swadesh. The results of this study also show that the glottochronology of the Gorontalo and Atinggola languages is as follows: (a) Gorontalo and Atinggola were one single language until 1,377 ± 122 years ago; (b) Gorontalo and Atinggola were one single language until 1,449 - 1,255 years ago. This study concludes that (a) the kinship relation of these two languages is at the family-group level, and (b) the glottochronology (separation time) between the Gorontalo and Atinggola languages is between 1.4 and 1.2 thousand years ago, or in the 12th to 14th century. Keywords: relation, kinship level, local language, Gorontalo Province, lexicostatistics study
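    The dates in this record follow the standard Swadesh/Lees glottochronology formula t = ln(C) / (2 ln(r)), where C is the shared-cognate proportion and r is the assumed retention rate per millennium (conventionally 0.805 for the 200-item list; that constant is an assumption here, not stated in the record). Plugging in the study's 55.05% reproduces its central estimate:

```python
import math

def glottochronology_years(cognate_share: float, r: float = 0.805) -> float:
    """Minimum divergence time t = ln(C) / (2 ln(r)), converted to years.

    C is the shared-cognate proportion; r is the assumed retention rate
    per millennium for the 200-item Swadesh list.
    """
    return 1000 * math.log(cognate_share) / (2 * math.log(r))

# The study reports 55.05% cognate pairs between Gorontalo and Atinggola.
years = glottochronology_years(0.5505)
print(round(years))  # 1376, matching the record's 1,377 +/- 122 years
```

    The ± 122-year band in the record reflects the statistical uncertainty conventionally attached to such estimates, not a second formula.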

  2. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Directory of Open Access Journals (Sweden)

    Wei Tian

    Full Text Available The Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented the UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of the UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) for 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased and physician workload and inflation-adjusted per-visit healthcare charges increased, while inpatient physician work-days increased and the inpatient mortality rate fell. Interestingly, the decreasing trend in the inpatient mortality rate was neutralized after the UZMDP implementation. Compared with BJT and under the influence of the UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while the JST outpatient service had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine charges (APC = -5.2%). Implementation of the UZMDP seems to increase annual patient-visits, reduce RMOH, and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  3. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Science.gov (United States)

    Tian, Wei; Yuan, Jiangfan; Yang, Dong; Zhang, Lanjing

    2016-01-01

Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in outpatient service, physician work-days decreased and physician-workload and inflation-adjusted per-visit healthcare charges increased, while the inpatient physician work-days increased and inpatient mortality-rate reduced. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while JST outpatient services had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  4. A Survey in Indexing and Searching XML Documents.

    Science.gov (United States)

    Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James

    2002-01-01

    Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…

  5. CLAIM (CLinical Accounting InforMation)--an XML-based data exchange standard for connecting electronic medical record systems to patient accounting systems.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2005-08-01

    With the evolving and diverse electronic medical record (EMR) systems, there appears to be an ever greater need to link EMR systems and patient accounting systems with a standardized data exchange format. To this end, the CLinical Accounting InforMation (CLAIM) data exchange standard was developed. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data among different medical institutions. CLAIM uses eXtensible Markup Language (XML) as a meta-language. The current version, 2.1, inherited the basic structure of MML 2.x and contains two modules including information related to registration, appointment, procedure and charging. CLAIM 2.1 was implemented successfully in Japan in 2001. Consequently, it was confirmed that CLAIM could be used as an effective data exchange format between EMR systems and patient accounting systems.
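The record above describes CLAIM as an XML-based message format layered on MML for moving accounting data between EMR and billing systems. As a rough illustration of what such a message-building step looks like, here is a minimal serializer using the Python standard library; note that the element names (`ClaimModule`, `Accounting`, `PatientId`, `ProcedureCode`, `Charge`) are invented for this sketch and are not the actual CLAIM 2.1 schema.

```python
# Hedged sketch: build a CLAIM-like XML accounting message.
# All element and attribute names are illustrative, not the real schema.
import xml.etree.ElementTree as ET

def build_claim_message(patient_id: str, procedure_code: str, charge: int) -> str:
    """Serialize one accounting entry as an XML fragment."""
    root = ET.Element("ClaimModule")
    acct = ET.SubElement(root, "Accounting")
    ET.SubElement(acct, "PatientId").text = patient_id
    ET.SubElement(acct, "ProcedureCode").text = procedure_code
    ET.SubElement(acct, "Charge", unit="JPY").text = str(charge)
    return ET.tostring(root, encoding="unicode")

xml_text = build_claim_message("P-0001", "K123", 5400)
```

In a real deployment the generated fragment would of course be validated against the published CLAIM schema before exchange.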

  6. Producing a Data Dictionary from an Extensible Markup Language (XML) Schemain the Global Force Management Data Initiative

    Science.gov (United States)

    2017-02-01

    are not to be construed as an official Department of the Army position unless so designated by other authorized documents. Citation of manufacturer’s...FMID and EwID fields, and mandatory fields. Cascading Style Sheets (CSS)13 are used to provide this functionality. The comments in the style definitions...explain the color specifications. <!-- colors and other details are defined in CSS styles --> < style type="text/css"> body, th, td { font-family

  7. Translating medical documents improves students' communication skills in simulated physician-patient encounters.

    Science.gov (United States)

    Bittner, Anja; Bittner, Johannes; Jonietz, Ansgar; Dybowski, Christoph; Harendza, Sigrid

    2016-02-27

    Patient-physician communication should be based on plain and simple language. Despite communication skill trainings in undergraduate medical curricula medical students and physicians are often still not aware of using medical jargon when communicating with patients. The aim of this study was to compare linguistic communication skills of undergraduate medical students who voluntarily translate medical documents into plain language with students who do not participate in this voluntary task. Fifty-nine undergraduate medical students participated in this study. Twenty-nine participants were actively involved in voluntarily translating medical documents for real patients into plain language on the online-platform https://washabich.de (WHI group) and 30 participants were not (non-WHI group). The assessment resembled a virtual consultation hour, where participants were connected via skype to six simulated patients (SPs). The SPs assessed participants' communication skills. All conversations were transcribed and assessed for communication skills and medical correctness by a blinded expert. All participants completed a self-assessment questionnaire on their communication skills. Across all raters, the WHI group was assessed significantly (p = .007) better than the non-WHI group regarding the use of plain language. The blinded expert assessed the WHI group significantly (p = .018) better regarding the use of stylistic devices of communication. The SPs would choose participants from the WHI group significantly (p = .041) more frequently as their personal physician. No significant differences between the two groups were observed with respect to the medical correctness of the consultations. Written translation of medical documents is associated with significantly more frequent use of plain language in simulated physician-patient encounters. Similar extracurricular exercises might be a useful tool for medical students to enhance their communication skills with

  8. The International Language Esperanto 1887-1987: Towards the Second Century. Esperanto Documents 39A.

    Science.gov (United States)

    Tonkin, Humphrey

    A discussion of Esperanto in the modern world outlines the rationale for the use of an international language, the role of Esperanto in promoting international communication, Esperanto-related organizations and services, and the characteristics of the language that make it useful and easy to teach. Also included are a fact sheet describing the…

  9. Observation of Spontaneous Expressive Language (OSEL): A New Measure for Spontaneous and Expressive Language of Children with Autism Spectrum Disorders and Other Communication Disorders

    Science.gov (United States)

    Kim, So Hyun; Junker, Dörte; Lord, Catherine

    2014-01-01

    A new language measure, the Observation of Spontaneous Expressive Language (OSEL), is intended to document spontaneous use of syntax, pragmatics, and semantics in 2-12-year-old children with Autism Spectrum Disorder (ASD) and other communication disorders with expressive language levels comparable to typical 2-5 year olds. Because the purpose of…

  10. Designing and Implementing a Cross-Language Information Retrieval System Using Linguistic Corpora

    Directory of Open Access Journals (Sweden)

    Amin Nezarat

    2012-03-01

Full Text Available Information retrieval (IR) is a crucial area of natural language processing (NLP) and can be defined as finding documents whose content is relevant to the query need of a user. Cross-language information retrieval (CLIR) refers to a kind of information retrieval in which the language of the query and that of the searched document are different. In fact, it is a retrieval process where the user presents queries in one language to retrieve documents in another language. This paper tried to construct a bilingual lexicon of parallel chunks of English and Persian from two very large monolingual corpora and an English-Persian parallel corpus, which could be directly applied to cross-language information retrieval tasks. For this purpose, a statistical measure known as the Association Score (AS) was used to compute the association value between every two corresponding chunks in the corpus using a couple of complicated algorithms. Once the CLIR system was developed using this bilingual lexicon, an experiment was performed on a set of one hundred English and Persian phrases and collocations to see to what extent this system was effective in assisting users to find the most relevant and suitable equivalents of their queries in either language.
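The abstract does not spell out its Association Score formula, so the sketch below substitutes the Dice coefficient, a common chunk-association measure, computed over a toy set of aligned English-Persian chunk pairs (the pairs themselves are invented for illustration):

```python
# Hedged sketch: score candidate translation-chunk pairs with the Dice
# coefficient, used here as an illustrative stand-in for the paper's AS.
from collections import Counter

def dice_association(pair_freq: int, src_freq: int, tgt_freq: int) -> float:
    """Dice coefficient: 2*f(e,p) / (f(e) + f(p))."""
    return 2.0 * pair_freq / (src_freq + tgt_freq)

# Toy aligned chunk pairs (Persian shown transliterated).
aligned = [("hot water", "ab-e garm"), ("hot water", "ab-e garm"),
           ("cold water", "ab-e sard"), ("hot water", "ab-e dagh")]
pair_counts = Counter(aligned)
src_counts = Counter(e for e, _ in aligned)
tgt_counts = Counter(p for _, p in aligned)

score = dice_association(pair_counts[("hot water", "ab-e garm")],
                         src_counts["hot water"], tgt_counts["ab-e garm"])
```

Pairs with high scores would be kept as lexicon entries; here ("hot water", "ab-e garm") scores 2*2/(3+2) = 0.8.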

  11. Enhancing transparent fuzzy controllers through temporal concepts : an application to computer games

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.; Vitiello, A.

    2010-01-01

In recent years, FML (Fuzzy Markup Language) has emerged as one of the most efficient and useful languages for defining fuzzy control, thanks to its capability of modeling Fuzzy Logic Controllers in a human-readable and hardware-independent way, i.e. the so-called Transparent Fuzzy Controllers.
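Since FML controller definitions are ordinary XML, they can be inspected with standard tooling. The snippet below mimics FML's flavor, but the tag and attribute names are illustrative and not guaranteed to match the actual FML specification:

```python
# Hedged sketch: parse an FML-style controller definition with the
# standard library. Tag names are illustrative, not the FML standard.
import xml.etree.ElementTree as ET

FML_SNIPPET = """
<fuzzyController name="thermostat">
  <knowledgeBase>
    <fuzzyVariable name="temperature" domainLeft="0" domainRight="40"/>
  </knowledgeBase>
</fuzzyController>
"""

root = ET.fromstring(FML_SNIPPET)
variables = [v.get("name") for v in root.iter("fuzzyVariable")]
```

This hardware-independence is the point of the "transparent" label: the same XML description can be consumed by different inference engines.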

  12. Language Universalization for Improved Information Management: The Necessity for Esperanto.

    Science.gov (United States)

    Jones, R. Kent

    1978-01-01

    Discusses problems for information management in dealing with multilingual documentation. The planned language, Esperanto, is suggested as a universal working language because of its neutrality, rational structure, clarity, and expressive power. (Author/CWM)

  13. Automatic Encoding and Language Detection in the GSDL

    Directory of Open Access Journals (Sweden)

    Otakar Pinkas

    2014-10-01

Full Text Available Automatic detection of encoding and language of the text is part of the Greenstone Digital Library Software (GSDL) for building and distributing digital collections. It is developed by the University of Waikato (New Zealand) in cooperation with UNESCO. The automatic encoding and language detection in Slavic languages is difficult and it sometimes fails. The aim is to detect cases of failure. The automatic detection in the GSDL is based on the n-grams method. The most frequent n-grams for Czech are presented. The whole process of automatic detection in the GSDL is described. The input documents to test collections are plain texts encoded in ISO-8859-1, ISO-8859-2 and Windows-1250. We manually evaluated the quality of automatic detection. The causes of errors include the predominance of an improper language model and an incorrect switch to Windows-1250. We carried out further tests on documents that were more complex.
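The n-grams method mentioned above is commonly implemented by comparing ranked character-n-gram profiles with an "out-of-place" distance. The following is a simplified sketch of that general idea, not the GSDL implementation; the training sentences are toy stand-ins for real language models:

```python
# Hedged sketch: character-trigram language identification via ranked
# profile comparison. A simplified illustration, not the GSDL code.
from collections import Counter

def ngram_profile(text: str, n: int = 3, top: int = 50):
    """Most frequent character n-grams, ranked by frequency."""
    grams = Counter(text[i:i+n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def profile_distance(doc_profile, lang_profile):
    """Sum of rank differences; missing n-grams pay a fixed penalty."""
    penalty = len(lang_profile)
    return sum(abs(i - lang_profile.index(g)) if g in lang_profile else penalty
               for i, g in enumerate(doc_profile))

czech = ngram_profile("příliš žluťoučký kůň úpěl ďábelské ódy " * 20)
english = ngram_profile("the quick brown fox jumps over the lazy dog " * 20)
sample = ngram_profile("žluťoučký kůň úpěl")
detected = min([("cs", czech), ("en", english)],
               key=lambda kv: profile_distance(sample, kv[1]))[0]
```

Failures of the kind the record describes arise when a document's profile is closer to the wrong language model, e.g. for short or mixed-encoding inputs.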

  14. An integrated information retrieval and document management system

    Science.gov (United States)

    Coles, L. Stephen; Alvarez, J. Fernando; Chen, James; Chen, William; Cheung, Lai-Mei; Clancy, Susan; Wong, Alexis

    1993-01-01

This paper describes the requirements and prototype development for an intelligent document management and information retrieval system that will be capable of handling millions of pages of text or other data. Technologies for scanning, Optical Character Recognition (OCR), magneto-optical storage, and multiplatform retrieval using a Structured Query Language (SQL) will be discussed. The semantic ambiguity inherent in the English language is somewhat compensated for through the use of coefficients or weighting factors for partial synonyms. Such coefficients are used both for defining structured query trees for routine queries and for establishing long-term interest profiles that can be used on a regular basis to alert individual users to the presence of relevant documents that may have just arrived from an external source, such as a news wire service. Although this attempt at evidential reasoning is limited in comparison with the latest developments in AI Expert Systems technology, it has the advantage of being commercially available.
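The weighting scheme the record describes, where partial synonyms carry coefficients that contribute to a document's score, can be sketched as follows; the terms and weights below are invented for illustration:

```python
# Hedged sketch: score documents by summing the coefficients of matched
# partial synonyms of a query term. Terms and weights are illustrative.
SYNONYMS = {"failure": 1.0, "fault": 0.9, "malfunction": 0.8, "glitch": 0.4}

def score(document: str) -> float:
    """Sum the weights of synonym terms present in the document."""
    words = set(document.lower().split())
    return sum(w for term, w in SYNONYMS.items() if term in words)

docs = ["Sensor fault detected in pump",
        "Minor glitch during startup",
        "Routine maintenance completed"]
ranked = sorted(docs, key=score, reverse=True)
```

A long-term interest profile would be the same structure evaluated on each newly arrived document, alerting the user when the score crosses a threshold.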

  15. Development of a Google-based search engine for data mining radiology reports.

    Science.gov (United States)

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. Comprising 7.9 GB of disk space, 2.9 million text reports were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
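As a sketch of the report-preparation step described above, each report is wrapped in a small XML document before indexing. The element names and the de-identification regex below are illustrative only; the study's actual XML layout and HIPAA pipeline are not given in this record:

```python
# Hedged sketch: wrap a radiology report in XML for indexing, with a
# crude de-identification placeholder (mask runs of 7+ digits, e.g.
# medical record numbers). Element names are invented for illustration.
import re
import xml.etree.ElementTree as ET

def report_to_xml(accession: str, body: str) -> str:
    body = re.sub(r"\d{7,}", "[ID]", body)  # placeholder de-identification
    root = ET.Element("report", accession=accession)
    ET.SubElement(root, "text").text = body
    return ET.tostring(root, encoding="unicode")

doc = report_to_xml("A-42", "Patient 1234567 has a small pneumothorax.")
```

Files of this shape are what a desktop search engine such as the one described would then crawl and index.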

  16. Word Spotting for Indic Documents to Facilitate Retrieval

    Science.gov (United States)

    Bhardwaj, Anurag; Setlur, Srirangaraj; Govindaraju, Venu

    With advances in the field of digitization of printed documents and several mass digitization projects underway, information retrieval and document search have emerged as key research areas. However, most of the current work in these areas is limited to English and a few oriental languages. The lack of efficient solutions for Indic scripts has hampered information extraction from a large body of documents of cultural and historical importance. This chapter presents two relevant topics in this area. First, we describe the use of a script-specific keyword spotting for Devanagari documents that makes use of domain knowledge of the script. Second, we address the needs of a digital library to provide access to a collection of documents from multiple scripts. This requires intelligent solutions which scale across different scripts. We present a script-independent keyword spotting approach for this purpose. Experimental results illustrate the efficacy of our methods.

  17. Critical Language Awareness: Curriculum 2005 meets the TRC ...

    African Journals Online (AJOL)

    This article discusses the different ways in which the relationship between language and power is conceptualised in recent curriculum documents and in the Truth and Reconciliation Commission Report. It uses the commissioners' insights that language is a form of social action and that discourses constitute our identities to ...

  18. Web-Based Collaborative Publications System: R&Tserve

    Science.gov (United States)

    Abrams, Steve

    1997-01-01

    R&Tserve is a publications system based on 'commercial, off-the-shelf' (COTS) software that provides a persistent, collaborative workspace for authors and editors to support the entire publication development process from initial submission, through iterative editing in a hierarchical approval structure, and on to 'publication' on the WWW. It requires no specific knowledge of the WWW (beyond basic use) or HyperText Markup Language (HTML). Graphics and URLs are automatically supported. The system includes a transaction archive, a comments utility, help functionality, automated graphics conversion, automated table generation, and an email-based notification system. It may be configured and administered via the WWW and can support publications ranging from single page documents to multiple-volume 'tomes'.

  19. Teotamachilizti: an analysis of the language in a Nahua sermon from colonial Guatemala

    DEFF Research Database (Denmark)

    Madajczak, Julia; Pharao Hansen, Magnus

    2016-01-01

    The article analyses the document teotamachilizti, a sermon in a Nahuan language from colonial Guatemala. It concludes that the language is a Central Nahuan language closely related to "classical Nahuatl", but with some features of an Eastern Nahuan language closely related to Pipil Nawat...

  20. Pragmatic Language Features of Mothers with the "FMR1" Premutation Are Associated with the Language Outcomes of Adolescents and Young Adults with Fragile X Syndrome

    Science.gov (United States)

    Klusek, Jessica; McGrath, Sara E.; Abbeduto, Leonard; Roberts, Jane E.

    2016-01-01

    Purpose: Pragmatic language difficulties have been documented as part of the FMR1 premutation phenotype, yet the interplay between these features in mothers and the language outcomes of their children with fragile X syndrome is unknown. This study aimed to determine whether pragmatic language difficulties in mothers with the "FMR1"…

  1. 76 FR 2677 - Request Facilities To Report Toxics Release Inventory Information Electronically or Complete Fill...

    Science.gov (United States)

    2011-01-14

    ... reflected by the generally increasing percentage of facilities that submit TRI reporting forms... ability to submit valid chemical data files from third party software using eXtensible Markup Language...

  2. Knowledge Graphs as Context Models: Improving the Detection of Cross-Language Plagiarism with Paraphrasing

    OpenAIRE

    Franco-Salvador, Marc; Gupta, Parth; Rosso, Paolo

    2013-01-01

Cross-language plagiarism detection attempts to automatically identify and extract plagiarism among documents in different languages. Plagiarized fragments can be verbatim translated copies, or their structure may be altered to hide the copying, which is known as paraphrasing and is more difficult to detect. In order to improve paraphrasing detection, we use a knowledge graph-based approach to obtain and compare context models of document fragments in different languages. Experimental results i...

  3. XBRL: Beyond Basic XML

    Science.gov (United States)

    VanLengen, Craig Alan

    2010-01-01

    The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…

  4. Using Literary Texts to Teach Grammar in Foreign Language Classroom

    Science.gov (United States)

    Atmaca, Hasan; Günday, Rifat

    2016-01-01

Today, it is argued that the use of literary texts as course material in the foreign language classroom is not obligatory, but it is necessary due to the close relationship between language and literature. Although literary texts are accepted as authentic documents and do not have any purpose for language teaching, they are indispensable sources to be…

  5. The Language Problem in Science and the Role of the International Language Esperanto. Esperanto Documents 38 A.

    Science.gov (United States)

    Wendao, Ouyang; Sherwood, Bruce A.

    Two essays discuss the need for improved international transfer of scientific and technical information and propose the international language Esperanto for that purpose. "The Role of Esperanto" by Ouyang Wendao suggests that the burden of time and energy spent in translating scientific literature quickly and well and the difficulties of…

  6. Detailed Design Documentation, without the Pain

    Science.gov (United States)

    Ramsay, C. D.; Parkes, S.

    2004-06-01

Producing detailed forms of design documentation, such as pseudocode and structured flowcharts, to describe the procedures of a software system: (1) allows software developers to model and discuss their understanding of a problem and the design of a solution free from the syntax of a programming language, (2) facilitates deeper involvement of non-technical stakeholders, such as the customer or project managers, whose influence ensures the quality, correctness and timeliness of the resulting system, (3) forms comprehensive documentation of the system for its future maintenance, reuse and/or redeployment. However, such forms of documentation require effort to create and maintain. This paper describes a software tool which is currently being developed within the Space Systems Research Group at the University of Dundee which aims to improve the utility of, and the incentive for, creating detailed design documentation for the procedures of a software system. The rationale for creating such a tool is briefly discussed, followed by a description of the tool itself, a summary of its perceived benefits, and plans for future work.

  7. Air Force Information Workflow Automation through Synchronized Air Power Management (SAPM)

    National Research Council Canada - National Science Library

    Benkley, Carl; Chang, Irene; Crowley, John; Oristian, Thomas

    2004-01-01

    .... Implementing Extensible Markup Language (XML) messages, web services, and workflow automation, SAPM expands existing web-based capabilities, enables machine-to-machine interfaces, and streamlines the war fighter kill chain process...

  8. Classification of health webpages as expert and non expert with a reduced set of cross-language features.

    Science.gov (United States)

    Grabar, Natalia; Krivine, Sonia; Jaulent, Marie-Christine

    2007-10-11

Making the distinction between expert and non-expert health documents can help users to select the information which is most suitable for them, according to whether or not they are familiar with medical terminology. This issue is particularly important for the information retrieval area. In our work we address this purpose through stylistic corpus analysis and the application of machine learning algorithms. Our hypothesis is that this distinction can be performed on the basis of a small number of features and that such features can be language- and domain-independent. The features were acquired from a source corpus (Russian language, diabetes topic) and then tested on the target (French language, pneumology topic) and source corpora. These cross-language features show 90% precision and 93% recall with non-expert documents in the source language, and 85% precision and 74% recall with expert documents in the target language.
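The precision and recall figures quoted above follow the standard definitions, computed here for a toy set of expert/non-expert predictions (the labels are invented for illustration):

```python
# Hedged sketch: standard precision/recall over a toy label set,
# matching the definitions behind the figures quoted in the record.
def precision_recall(y_true, y_pred, positive="expert"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

y_true = ["expert", "expert", "lay", "lay", "expert"]
y_pred = ["expert", "lay",    "lay", "expert", "expert"]
prec, rec = precision_recall(y_true, y_pred)
```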

  9. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

The World Wide Web is a source of information structured as interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts and concept relationships, but the results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts in the documents but also the relationships between those concepts. In our approach, documents are processed into ontologies using a base ontology and a dictionary of concept records, each record listing the probable words that represent a given concept. Finally, the document ontologies are compared to determine their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between concepts are identified by capturing author and user intention. The proposed semantic analysis technique provides improved results compared with existing techniques.
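The core idea above, comparing both the concept sets of two documents and the relations between their concepts, can be sketched with Jaccard overlap as a simple stand-in for the paper's (unspecified) scoring function; the concepts, relations and the 50/50 weighting are invented for illustration:

```python
# Hedged sketch: document similarity from concept overlap plus
# relation-triple overlap, using Jaccard as an illustrative measure.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity(doc1, doc2, concept_weight=0.5):
    c = jaccard(doc1["concepts"], doc2["concepts"])
    r = jaccard(doc1["relations"], doc2["relations"])
    return concept_weight * c + (1 - concept_weight) * r

d1 = {"concepts": {"aspirin", "headache"},
      "relations": {("aspirin", "treats", "headache")}}
d2 = {"concepts": {"aspirin", "fever"},
      "relations": {("aspirin", "treats", "fever")}}
sim = similarity(d1, d2)
```

Scoring relations separately is what distinguishes this family of approaches from plain bag-of-concepts similarity: the two toy documents share a concept but no relation, so their score drops accordingly.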

  11. The place of SGML and HTML in building electronic patient records.

    Science.gov (United States)

    Pitty, D; Gordon, C; Reeves, P; Capey, A; Vieyra, P; Rickards, T

    1997-01-01

The authors are concerned that, although popular, SGML (Standard Generalized Markup Language) is only one approach to capturing, storing, viewing and exchanging healthcare information and does not provide a suitable paradigm for solving most of the problems associated with paper-based patient record systems. Although a discussion of the relative merits of SGML and HTML (HyperText Markup Language) may be interesting, we feel such a discussion avoids the real issues associated with the most appropriate way to model, represent, and store electronic patient information in order to solve healthcare problems, and therefore the medical informatics community should first concern itself with these issues. The paper substantiates this viewpoint and concludes with some suggestions of how progress can be made.

  12. Harry Potter in Translation: Making Language Learning Magical

    Science.gov (United States)

    Eaton, Sarah Elaine

    2012-01-01

    This guidebook for teachers documents the "Harry Potter in Translation" project undertaken at the Language Research Centre at the University of Calgary. The guide also offers 5 sample lesson plans for teachers of grades three to twelve for teaching world languages using the Harry Potter books in translation to engage students. (Contains…

  13. Diabetes mellitus: preliminary health-promotion activity based on ...

    African Journals Online (AJOL)

    2011-05-10

May 10, 2011 ... mark-up language (XML). For the analysis of the logged XML, an interpreter was written in the Python® programming language to convert the raw logs into tables of responses that could be analysed statistically. The pharmacy students also prepared a poster, interactive models and a bilingual English and ...

  14. The Treatment of Culture in the Foreign Language Curriculum: An Analysis of National Curriculum Documents

    Science.gov (United States)

    Lavrenteva, Evgenia; Orland-Barak, Lily

    2015-01-01

    Teaching culture in the foreign language classroom has been widely debated ever since its importance was recognized. Current research suggests that centralized "top down" curricular policies can become potential constraints to teaching culture and points to the need for adapting curricula for culture-integrated language learning. This…

  15. PML:PAGE-OM Markup Language: About PAGE-OM [

    Lifescience Database Archive (English)

Full Text Available the Object Management Group (OMG) standardization organization, and this was approved in 2006. The latest meeting... to continue this model development was held in Tokyo in September 2007. The meeting discussed extension...ation as well as modeling experimental results for associations between genotype and phenotype. The outcome of that meeting

  16. GENIE-V3 concepts document

    International Nuclear Information System (INIS)

    Moreton-Smith, C.M.

    1990-01-01

The GENIE data analysis program is a program used for analysis of data from the ISIS neutron scattering instruments. The current version, GENIE-V2, is now being re-written to provide a much more powerful data analysis system for the next major version of the program, GENIE-V3. The purpose of this "Concepts Document" is to establish a frame of reference within which to discuss the development of GENIE-V3. It does not seek to define everything which will be implemented in the new version of the GENIE program. A substantial amount of design effort has been expended to produce a plausible design for the language and operation of GENIE-V3. Having said this, it is in no way a complete specification. Several features (although intended for any working version of GENIE-V3) have not been documented here; hopefully, though, nothing material has been left out. To keep this document to a reasonable length, commands which can be written using GENIE-V3 are omitted. (author)

  17. Figurative language processing in atypical populations: the ASD perspective

    Science.gov (United States)

    Vulchanova, Mila; Saldaña, David; Chahboun, Sobh; Vulchanov, Valentin

    2015-01-01

    This paper is intended to provide a critical overview of experimental and clinical research documenting problems in figurative language processing in atypical populations with a focus on the Autistic Spectrum. Research in the comprehension and processing of figurative language in autism invariably documents problems in this area. The greater paradox is that even at the higher end of the spectrum or in the cases of linguistically talented individuals with Asperger syndrome, where structural language competence is intact, problems with extended language persist. If we assume that figurative and extended uses of language essentially depend on the perception and processing of more concrete core concepts and phenomena, the commonly observed failure in atypical populations to understand figurative language remains a puzzle. Various accounts have been offered to explain this issue, ranging from linking potential failure directly to overall structural language competence (Norbury, 2005; Brock et al., 2008) to right-hemispheric involvement (Gold and Faust, 2010). We argue that the dissociation between structural language and figurative language competence in autism should be sought in more general cognitive mechanisms and traits in the autistic phenotype (e.g., in terms of weak central coherence, Vulchanova et al., 2012b), as well as failure at on-line semantic integration with increased complexity and diversity of the stimuli (Coulson and Van Petten, 2002). This perspective is even more compelling in light of similar problems in a number of conditions, including both acquired (e.g., Aphasia) and developmental disorders (Williams Syndrome). This dissociation argues against a simple continuity view of language interpretation. PMID:25741261

  18. Figurative language processing in atypical populations: The ASD perspective

    Directory of Open Access Journals (Sweden)

    Mila eVulchanova

    2015-02-01

    This paper is intended to provide a critical overview of experimental and clinical research documenting problems in figurative language processing in atypical populations with a focus on the Autistic Spectrum. Research in the comprehension and processing of figurative language in autism invariably documents problems in this area. The greater paradox is that even at the higher end of the spectrum or in the cases of linguistically talented individuals with Asperger syndrome, where structural language competence is intact, problems with extended language persist. If we assume that figurative and extended uses of language essentially depend on the perception and processing of more concrete core concepts and phenomena, the commonly observed failure in atypical populations to understand figurative language remains a puzzle. Various accounts have been offered to explain this issue, ranging from linking potential failure directly to overall structural language competence (Brock et al., 2008; Norbury, 2005) to right-hemispheric involvement (Gold and Faust, 2010). We argue that the dissociation between structural language and figurative language competence in autism should be sought in more general cognitive mechanisms and traits in the autistic phenotype (e.g., in terms of weak central coherence, Vulchanova et al., 2012b), as well as failure at on-line semantic integration with increased complexity and diversity of the stimuli (Coulson and van Petten, 2002). This perspective is even more compelling in light of similar problems in a number of conditions, including both acquired (e.g., Aphasia) and developmental disorders (Williams Syndrome). This dissociation argues against a simple continuity view of language interpretation.

  20. Webscripter: End-User Tools for Composition Ontology-Enabled Web Services

    National Research Council Canada - National Science Library

    Frank, Martin

    2005-01-01

    ... (schemes or ontologies) with respect to objects. The DARPA Agent Markup Language (DAML), through the use of ontologies, provides a very powerful way to describe objects and their relationships to other objects...

  1. Core Semantics for Public Ontologies

    National Research Council Canada - National Science Library

    Suni, Niranjan

    2005-01-01

    ... (schemas or ontologies) with respect to objects. The DARPA Agent Markup Language (DAML) through the use of ontologies provides a very powerful way to describe objects and their relationships to other objects...

  2. Factors that affect the accuracy of text-based language identification

    CSIR Research Space (South Africa)

    Botha, GR

    2007-11-01

    In addition to its excellent accuracy, another significant advantage of the NB classifier is that new language documents can simply be merged into an existing classifier by adding the n-gram statistics of these documents to the current language model...
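    The merge-by-adding-counts property described above can be sketched as follows. This is a minimal, hypothetical illustration of a character n-gram Naive Bayes language identifier; the class name, training phrases, and smoothing choices are invented for illustration and are not taken from the paper:

```python
from collections import Counter, defaultdict
from math import log

def ngrams(text, n=2):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NGramNB:
    """Character n-gram Naive Bayes language identifier (illustrative sketch)."""
    def __init__(self, n=2):
        self.n = n
        self.counts = defaultdict(Counter)  # language -> n-gram counts

    def add_documents(self, language, texts):
        # Merging new documents into the model is just adding their n-gram counts.
        for text in texts:
            self.counts[language].update(ngrams(text, self.n))

    def classify(self, text):
        def score(lang):
            c = self.counts[lang]
            total = sum(c.values())
            vocab = len(c) + 1
            # Laplace-smoothed log-likelihood of the text under this language model
            return sum(log((c[g] + 1) / (total + vocab)) for g in ngrams(text, self.n))
        return max(self.counts, key=score)

clf = NGramNB()
clf.add_documents("en", ["the quick brown fox", "hello there friend"])
clf.add_documents("nl", ["de snelle bruine vos", "hallo daar vriend"])
print(clf.classify("the brown fox"))  # -> en
```

    Because the model is just a table of n-gram counts per language, the same `add_documents` call both trains the initial classifier and merges new documents into an existing one.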

  3. HAL/S language specification. Version IR-542

    Science.gov (United States)

    1980-01-01

    The formal HAL/S language specification is documented, with particular reference to the essentials of HAL/S syntax and semantics. The language is intended to satisfy virtually all of the flight software requirements of NASA programs. To achieve this, HAL/S incorporates a wide range of features, including applications-oriented data types and organizations, real-time control mechanisms, and constructs for systems programming tasks.

  4. Philippine and North Bornean Languages: Issues in Description, Subgrouping, and Reconstruction

    Science.gov (United States)

    Lobel, Jason William

    2013-01-01

    The Philippines, northern Sulawesi, and northern Borneo are home to two or three hundred languages that can be described as Philippine-type. In spite of nearly five hundred years of language documentation in the Philippines, and at least a century of work in Borneo and Sulawesi, the majority of these languages remain grossly underdocumented, and…

  5. Recommended documentation for computer users at ANL. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Heiberger, A.A.

    1992-04-01

    Recommended Documentation for Computer Users at ANL is for all users of the services available from the Argonne National Laboratory (ANL) Computing and Telecommunications Division (CTD). This document will guide you in selecting available documentation that will best fill your particular needs. Chapter 1 explains how to use this document to select documents and how to obtain them from the CTD Document Distribution Counter. Chapter 2 contains a table that categorizes available publications. Chapter 3 gives descriptions of the online DOCUMENT command for CMS, VAX, and the Sun workstation. DOCUMENT allows you to scan for and order documentation that interests you. Chapter 4 lists publications by subject. Categories I and IX cover publications of a general nature and publications on telecommunications and networks, respectively. Categories II, III, IV, V, VI, VII, VIII, and X cover publications on specific computer systems. Category XI covers publications on advanced scientific computing at Argonne. Chapter 5 contains abstracts for each publication, all arranged alphabetically. Chapter 6 describes additional publications containing bibliographies and master indexes that the user may find useful. The appendix identifies available computer systems, applications, languages, and libraries.

  6. Early Writing Deficits in Preschoolers with Oral Language Difficulties

    Science.gov (United States)

    Puranik, Cynthia S.; Lonigan, Christopher J.

    2012-01-01

    The purpose of this study was to investigate whether preschool children with language impairments (LI), a group with documented reading difficulties, also experience writing difficulties. In addition, a purpose was to examine if the writing outcomes differed when children had concomitant cognitive deficits in addition to oral language problems. A…

  7. Assessment of Deafblind Access to Manual Language Systems (ADAMLS)

    Science.gov (United States)

    Blaha, Robbie; Carlson, Brad

    2007-01-01

    This document presents the Assessment of Deafblind Access to Manual Language Systems (ADAMLS), a resource for educational teams who are responsible for developing appropriate adaptations and strategies for children who are deafblind who are candidates for learning manual language systems. The assessment tool should be used for all children with a…

  8. A Retrospective View of English Language Learning Materials Produced in Slovenia from 1945 to the Present

    Directory of Open Access Journals (Sweden)

    Janez Skela

    2013-05-01

    Taking a historical perspective, this article documents the development of domestically produced English Language Learning (ELL) materials in the period between 1945 and 2013. To this end, reference is made to milestones that marked shifts in linguistic and foreign language teaching paradigms, including aspects of Method and the underlying conception of language. The analysis will draw on aspects of Method in relation to language policy documents (i.e., curricula) and the course books in which these principles are embodied. Through the analysis of these factors we trace the evolution from Grammar–Translation methodology to Communicative Language Teaching in locally produced textbooks which are representative of various historical periods.

  9. Linguistic Theory and the Study of Legal and Bureaucratic Language. Document Design Project, Technical Report No. 16.

    Science.gov (United States)

    Charrow, Veda R.

    This paper studies legal language from three perspectives. First, legal language is defined as the variety of English that lawyers, judges, and other members of the legal community use in the course of their work. In a second section, it reviews descriptions of legal language by lawyers, linguists, and social scientists. These studies indicate…

  10. Avez-vous vu la meme chose que les Francais? Stereotypes et documents authentiques video. (Did You See the Same Thing as the French? Stereotypes and Authentic Video Documents).

    Science.gov (United States)

    Wilczynska, Weronika

    1990-01-01

    This discussion treats cultural stereotypes as a significant issue in second-language instruction, and examines the role of video documents from the mass media in helping students perceive and understand such stereotypes. (MSE)

  11. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.
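    The XML-to-HTML reformatting step in the chain above can be sketched as follows. This is a hypothetical illustration using only the Python standard library; the element names and values are invented and do not come from the actual HDF-EOS tools:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML metadata, as might be produced from ODL (names invented).
xml_metadata = """
<GranuleMetadata>
  <ShortName>MOD021KM</ShortName>
  <BeginningDate>2008-01-01</BeginningDate>
</GranuleMetadata>
"""

def metadata_to_html(xml_text):
    """Reformat flat XML metadata into a human-readable HTML table."""
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<tr><td>{child.tag}</td><td>{child.text}</td></tr>" for child in root
    )
    return f"<html><body><h1>{root.tag}</h1><table>{rows}</table></body></html>"

html = metadata_to_html(xml_metadata)
print(html)
```

    A real pipeline would read the XML from a file and write the HTML alongside the original HDF-EOS file on the Web server; the transformation itself is just a walk over the parsed element tree.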

  12. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, choosing of scripting languages, graphic design, user interface design, and database design. The challenge of a Web designer/implementer is his ability to create an attractive and informative Web. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web's objective. In this article we will discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We will also discuss the advantages and disadvantages of HTML over its successor XML in designing and implementing a Web. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML to carry out the present investigation. (author)
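    The contrast between the two languages can be illustrated with a small sketch: HTML tags describe presentation, while XML tags describe content, so XML fields can be addressed by name. The book record below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# The same record twice: HTML describes how it looks, XML describes what it is.
html_version = "<p><b>Web Design Basics</b> by <i>B. Author</i></p>"

xml_version = "<book><title>Web Design Basics</title><author>B. Author</author></book>"

# With XML, fields are addressed by meaningful tag names rather than by
# formatting tags such as <b> and <i>.
book = ET.fromstring(xml_version)
print(book.findtext("title"))   # -> Web Design Basics
print(book.findtext("author"))  # -> B. Author
```

    This is the sense in which XML is content-oriented: a program consuming the XML version needs no knowledge of how the record happens to be displayed.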

  13. A call for the adoption of nuclear utility information standards

    International Nuclear Information System (INIS)

    Slone, B.J. III; Richardson, C.E.

    1993-01-01

    In December 1986, the International Organization for Standardization (ISO) issued a standard for document representation, ISO 8879, "Standard Generalized Markup Language" (SGML). The standard prescribes a method for defining documents in two parts, one containing text and the other describing its structure without reference to word processing or publishing system software or hardware. It provides a method for "marking up" information, a way to place "tags" on pieces of information to define their purpose. SGML is part of a group of ISO standards titled "Information Processing - Text and Office Systems." This group contains the Document Style Semantics and Specification Language, the Standard Document Interchange Format, the Standard Page Description Language, and the fonts standard. The Department of Defense (DoD) has taken a standards initiative to improve the efficiency and reliability of systems by mandating that DoD suppliers comply with specific standards when delivering technical information. The initiative is called Computer-aided Acquisition and Logistics Support (CALS) and includes a package of standards to address engineering drawings, raster font images, vector illustrations, text, and magnetic tape format. SGML is the CALS standard for document representation. Other industries have adopted this standard. Several industry groups, such as the Air Transport Association, the American Association of Publishers, and the Telecommunications Industry Forum, are moving ahead vigorously with their own applications of SGML. The Institute of Electrical and Electronics Engineers has also adopted the ISO standard for production of its documents. Work is under way to develop their own SGML application.
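    The two-part structure that ISO 8879 prescribes can be illustrated with a minimal, hypothetical document type: the declaration describes the structure (the "- -" pairs are SGML tag-omission indicators), while the instance carries only tagged text, with no reference to any word processing or publishing system. The element names are invented for illustration:

```sgml
<!DOCTYPE report [
  <!ELEMENT report  - - (title, para+)>
  <!ELEMENT title   - - (#PCDATA)>
  <!ELEMENT para    - - (#PCDATA)>
]>
<report>
  <title>Plant Status Summary</title>
  <para>Tags describe the purpose of each piece of information,
  independent of any publishing software or hardware.</para>
</report>
```

    Any SGML-aware system can validate the instance against the declaration and then render it however it chooses, which is exactly the separation of content from presentation that the standard is designed to achieve.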

  14. Language disturbances from mesencephalo-thalamic infarcts

    International Nuclear Information System (INIS)

    Lazzarino, L.G.; Nicolai, A.; Valassi, F.; Biasizzo, E.

    1991-01-01

    The authors report the cases of two patients with CT-documented paramedian mesencephalo-thalamic infarcts showing language disturbances. The first patient showed a non-fluent, transcortical-motor-like aphasia; the other had a fluent but severely paraphasic language disorder. The CT study disclosed that the dorso-median thalamic nucleus was the one most involved in both cases. These findings agree with a few previous pathological studies suggesting that the paramedian thalamic nuclei, particularly the dorso-median nucleus, may play some role in language disturbances. However, the anatomical basis for thalamic aphasia remains speculative, taking into account the importance of cortical connections in the origin of subcortical neuropsychological disturbances. (orig.)

  15. Download this PDF file

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... embedded, extended with sensors or automated visual inspection. Keywords: ... influential for mobile as well as for manufacturing oriented. Robotic .... integration with web services and Internet of Things a markup language ...

  16. Portfolio langagier en francais (Language Portfolios in French).

    Science.gov (United States)

    Laplante, Bernard; Christiansen, Helen

    2001-01-01

    Suggests that first-year college students learning French should create a language portfolio that contains documents that illustrate what they have learned in French, along with a brief statement of what linguistic skill the document demonstrates. The goal of the portfolio is to make students more aware of their own learning, their strengths, and…

  17. Perspectives on Linguistic Documentation from Sociolinguistic Research on Dialects

    Science.gov (United States)

    Tagliamonte, Sali A.

    2017-01-01

    The goal of the paper is to demonstrate how sociolinguistic research can be applied to endangered language documentation field linguistics. It first provides an overview of the techniques and practices of sociolinguistic fieldwork and the ensuing corpus compilation methods. The discussion is framed with examples from research projects focused on…

  18. Nassi-Schneiderman Diagram in HTML Based on AML

    Science.gov (United States)

    Menyhárt, László

    2013-01-01

    In an earlier work I defined an extension of XML called Algorithm Markup Language (AML) for easy and understandable coding in an IDE which supports XML editing (e.g. NetBeans). The AML extension contains annotations and native language (English or Hungarian) tag names used when coding our algorithm. This paper presents a drawing tool with which…

  19. Adding XML to the MIS Curriculum: Lessons from the Classroom

    Science.gov (United States)

    Wagner, William P.; Pant, Vik; Hilken, Ralph

    2008-01-01

    eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…
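    The sense in which XML is an extensible "meta" language can be sketched briefly: authors define their own tag vocabulary, and any XML-aware program on any platform can parse the result. The course-catalog vocabulary below is invented for illustration (Python standard library only):

```python
import xml.etree.ElementTree as ET

# XML is "extensible": the tag vocabulary below is our own invention,
# yet any XML-aware program on any platform can parse it.
catalog = ET.Element("catalog")
course = ET.SubElement(catalog, "course", code="MIS240")
ET.SubElement(course, "title").text = "Introduction to XML"
ET.SubElement(course, "credits").text = "3"

wire_format = ET.tostring(catalog, encoding="unicode")
print(wire_format)

# A receiving system reads the same structure back; no shared binary
# format or common platform is required.
received = ET.fromstring(wire_format)
print(received.find("course").get("code"))  # -> MIS240
```

    This round trip through plain text is what makes XML-based interchange over networks more seamless than earlier, platform-bound formats.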

  20. Microcomputer Software Engineering, Documentation and Evaluation

    Science.gov (United States)

    1981-03-31

    To proceed step by step, we need to know where we are going and a normal sequence that should be preserved in the documentation. For example, you ... with linear, sequential logic (like a computer). It is also the verbal side and controls language. The right side specializes in images, music, pictures

  1. Searching for coherence in language teaching: the issue of teaching competencies

    OpenAIRE

    Carlos Rico Troncoso

    2011-01-01

    This document is an attempt to show some theoretical issues teachers should take into account when adopting the commitment of teaching languages. Many things have been said about teaching languages, but there has not been any systematic reflection on teaching a foreign language in our context. Our foreign language history has shown that Colombian teachers implement many things in their classrooms without realizing the impact of those implementations in the theoretical and practical field....

  2. Dichotic listening performance predicts language comprehension.

    Science.gov (United States)

    Asbjørnsen, Arve E; Helland, Turid

    2006-05-01

    Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, the documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and the dichotic listening performance was expected. However, since the left ear score is believed to be an error when assessing language laterality, covariation was expected for the right ear scores only. In addition, directing attention to one ear input was believed to reduce the influence of random factors, and thus show a more concise estimate of left hemisphere language capacity. Thus, a stronger correlation between language comprehension skills and the dichotic listening performance when attending to the right ear was expected. The analyses yielded a positive correlation between the right ear score in DL and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.

  3. Lexicon Optimization for Dutch Speech Recognition in Spoken Document Retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  4. Lexicon optimization for Dutch speech recognition in spoken document retrieval

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.; van Hessen, Adrianus J.; de Jong, Franciska M.G.; Dalsgaard, P.; Lindberg, B.; Benner, H.

    2001-01-01

    In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage

  5. CSS Preprocessing: Tools and Automation Techniques

    Directory of Open Access Journals (Sweden)

    Ricardo Queirós

    2018-01-01

    Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language; more precisely, for styling Web documents. However, in the last few years the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches, known as CSS preprocessors, give developers mechanisms to preprocess CSS rules through the use of programming constructs, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering structured programming of stylesheets. At the same time, a new set of tools, known as postprocessors, appeared for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hand, developers need a consistent workflow to foster modular CSS coding. This paper presents an introductory survey of CSS processors. The survey gathers information on a specific set of processors, categorizes them, and compares their features against a set of predefined criteria such as maturity, coverage and performance. Finally, we propose a basic set of best practices for setting up a simple and pragmatic styling code workflow.
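    A minimal, hypothetical example of the preprocessor constructs mentioned above, written in SCSS-like syntax; the selectors, variable name, and values are all invented for illustration:

```scss
/* SCSS-style source: a variable and a nested rule (illustrative only) */
$brand: #0055aa;

nav {
  background: $brand;
  a { color: white; }
}

/* A preprocessor compiles this to plain CSS, roughly:
   nav { background: #0055aa; }
   nav a { color: white; }          */
```

    The variable and the nesting are the "missing constructs" a preprocessor supplies: the compiled output is ordinary CSS, so browsers need no knowledge of the preprocessor at all.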

  6. Rancang Bangun Document Management System Untuk Mengelola Dokumen Standart Operational Procedure

    Directory of Open Access Journals (Sweden)

    I Putu Susila Handika

    2017-09-01

    A Standard Operational Procedure (SOP) is an important document in a company because it helps improve the quality of the company. PT. Global Retailindo Pratama is a retail company that uses the ISO 9001:2008 quality management standard. Currently, SOP documents at PT. Global Retailindo Pratama are still managed manually, which causes problems such as slow document search and distribution. This research aims to design and build a Document Management System to manage SOP documents. The system development model used in this research is the prototyping model. The application is web-based, with PHP as the programming language. Testing with Black Box Testing and Usability Testing shows that the Document Management System runs in accordance with the requirements and is easy to use, making the management of SOP documents faster. Keywords: Document Management System, Standard Operational Procedure, Information System, PHP.

  7. Kenyan indigenous languages in education: a world of potential ...

    African Journals Online (AJOL)

    Despite these well-documented findings on the benefits of using the learner's mother tongue as a language of instruction, the debate on the language of instruction has persisted, not just in Kenya but in several African countries. In Kenya, English is used as a medium of instruction right from nursery school, or in some ...

  8. Speech and Language Therapy Intervention in Schizophrenia: A Case Study

    Science.gov (United States)

    Clegg, Judy; Brumfitt, Shelagh; Parks, Randolph W.; Woodruff, Peter W. R.

    2007-01-01

    Background: There is a significant body of evidence documenting the speech and language abnormalities found in adult psychiatric disorders. These speech and language impairments can create additional social barriers for the individual and may hinder effective communication in psychiatric treatment and management. However, the role of speech and…

  9. Lexical processing and organization in bilingual first language acquisition: Guiding future research.

    Science.gov (United States)

    DeAnda, Stephanie; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2016-06-01

    A rich body of work in adult bilinguals documents an interconnected lexical network across languages, such that early word retrieval is language independent. This literature has yielded a number of influential models of bilingual semantic memory. However, extant models provide limited predictions about the emergence of lexical organization in bilingual first language acquisition (BFLA). Empirical evidence from monolingual infants suggests that lexical networks emerge early in development as children integrate phonological and semantic information. These findings tell us little about the interaction between 2 languages in early bilingual memory. To date, an understanding of when and how languages interact in early bilingual development is lacking. In this literature review, we present research documenting lexical-semantic development across monolingual and bilingual infants. This is followed by a discussion of current models of bilingual language representation and organization and their ability to account for the available empirical evidence. Together, these theoretical and empirical accounts inform and highlight unexplored areas of research and guide future work on early bilingual memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Impacts of Retailers’ Pricing Strategies for Produce Commodities on Farmer Welfare

    OpenAIRE

    Li, Chenguang; Sexton, Richard J.

    2009-01-01

    The typical model of retail pricing for produce products assumes retailers set price equal to the farm price plus a certain markup. However, observations from scanner data indicate a large degree of price dispersion in the grocery retailing market. In addition to markup pricing behavior, we document three alternative leading pricing patterns: fixed (constant) pricing, periodic sale, and high-low pricing. Retail price variations under these alternative pricing regimes in general have little co...
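    The difference between the pricing patterns can be sketched numerically. All prices below are invented for illustration and are not taken from the scanner data the authors analyse:

```python
# Toy weekly retail prices for one product under the pricing patterns the
# abstract describes (all numbers invented for illustration).
farm_price = [1.00, 1.10, 0.90, 1.05]               # weekly farm price
markup = [round(p + 0.50, 2) for p in farm_price]   # farm price plus a fixed markup
fixed = [1.60] * 4                                  # constant retail price
high_low = [1.80, 1.20, 1.80, 1.20]                 # alternating high/low price

# Under markup pricing, retail tracks the farm price one-for-one...
assert markup == [1.50, 1.60, 1.40, 1.55]
# ...while fixed and high-low retail prices move independently of it,
# which is the dispersion the scanner data reveal.
print(markup, fixed, high_low)
```

    The point of the contrast is that only the first series passes farm-price changes through to consumers; the welfare implications for farmers differ accordingly.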

  11. The Importance of Literacy in the Home Language

    Directory of Open Access Journals (Sweden)

    Susana A. Eisenchlas

    2013-10-01

    While the advantages of literacy in the home language have been widely documented, the Australian education system has not been proactive in providing institutional support for its development. This paper investigates the impact of (il)literacy in the home language on the academic, affective, and social development of bilingual/multilingual children and proposes principles that home-language-literacy programs should meet to be effective. It discusses programs that, although designed to develop literacy or second-language proficiency mainly in classroom contexts, could be easily adapted to address the needs of the linguistically and culturally diverse Australian context. We argue that the cost of not investing in successful home-language-literacy programs will be higher in the long run than their implementation costs and recommend that Australia consider supporting grassroots home-language-literacy programs in a push to improve overall literacy outcomes for Australian home-language speakers.

  12. Multi-font printed Mongolian document recognition system

    Science.gov (United States)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. Large numbers of Mongolian printed documents need to be digitized for digital libraries and various other applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which bring challenges to Mongolian OCR research. Because traditional Mongolian script has some unusual characteristics (for example, one character may be part of another character), we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find segmentation points by analyzing the properties of projections and connected components. As Mongolian font-types are categorized into two major groups, the segmentation parameters are adjusted for each group, and a font-type classification method for the two groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective: the text recognition rate is 96.9% on test samples from practical documents with multiple font-types and mixed scripts.
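    The projection-based segmentation idea can be sketched on a toy binary image: columns with no ink are candidate segmentation points between adjacent glyphs. This is a hypothetical illustration, not the paper's actual code, and real systems would also use the connected-component analysis the abstract mentions:

```python
# Segmentation by projection profile (illustrative sketch):
# columns whose ink projection is zero separate adjacent glyphs.

image = [  # tiny binary "page": 1 = ink, 0 = background
    [1, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
]

def column_projection(img):
    """Sum of ink pixels in each column."""
    return [sum(col) for col in zip(*img)]

def segmentation_points(img):
    """Columns with zero ink are candidate cut positions."""
    return [x for x, ink in enumerate(column_projection(img)) if ink == 0]

print(column_projection(image))    # -> [3, 2, 0, 3, 0, 0, 3]
print(segmentation_points(image))  # -> [2, 4, 5]
```

    Adjusting a threshold on the projection (rather than requiring exactly zero ink) is one plausible way such parameters could differ between font-type groups, as the abstract describes.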

  13. From Oral Ceremony to Written Document: The Transitional Language of Anglo-Saxon Wills.

    Science.gov (United States)

    Danet, Brenda; Bogoch, Bryna

    1992-01-01

    Presents theoretical discussion of the emergence of linguistic features of documents that indicate society is moving toward a view of writing as a form of constitutive social action and of written documents as autonomous material objects having a life of their own. Linguistic features of Anglo-Saxon wills are shown to differ from those of modern…

  14. Linguistic Aspects of Legal Language.

    Science.gov (United States)

    Crandall, JoAnn; Charrow, Veda R.

    Efforts to simplify language used in consumer documents come from the consumer movement and a public disillusioned with big business and government. Even before President Carter's 1978 executive order mandating simplification in government regulations, some agencies were revising regulations for clarity. However, these efforts were based on too…

  15. Dual Syntax for XML Languages

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Schwartzbach, Michael Ignatieff

    2008-01-01

    of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema....

  16. Automated Generation of Technical Documentation and Provenance for Reproducible Research

    Science.gov (United States)

    Jolly, B.; Medyckyj-Scott, D.; Spiekermann, R.; Ausseil, A. G.

    2017-12-01

    Data provenance and detailed technical documentation are essential components of high-quality reproducible research, but they are often only partially addressed during a research project. Recording and maintaining this information over the course of a project is a difficult task to get right, as it is a time-consuming and often tedious process for the researchers involved. As a result, provenance records and technical documentation provided alongside research results can be incomplete or may not be completely consistent with the actual processes followed. While providing access to the data and code used by the original researchers goes some way toward enabling reproducibility, this does not count as, or replace, data provenance. Additionally, it can be a poor substitute for good technical documentation and is often more difficult for a third party to understand, particularly if they do not understand the programming language(s) used. We present and discuss a tool built from the ground up for the production of well-documented and reproducible spatial datasets that are created by applying a series of classification rules to a number of input layers. The internal model of the classification rules that the tool requires to process the input data is exploited to also produce technical documentation and provenance records with minimal additional user input. Available provenance records that accompany input datasets are incorporated into those that describe the current process. As a result, each time a new iteration of the analysis is performed, the documentation and provenance records are re-generated to provide an accurate description of the exact process followed. The generic nature of this tool, and the lessons learned during its creation, have wider application to other fields where the production of derivative datasets must be done in an open, defensible, and reproducible way.
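
    The key design idea, deriving both the outputs and the documentation from a single internal rule model, can be sketched as follows (all rule names and data are hypothetical, not from the tool itself): because the documentation is generated from the same rules that do the classification, the two cannot drift apart.

    ```python
    # Minimal sketch: one rule model drives classification, provenance,
    # and auto-generated documentation. Names and rules are invented.

    from datetime import date

    RULES = [  # (rule name, predicate, output class)
        ("steep",  lambda cell: cell["slope"] > 30,  "unsuitable"),
        ("wet",    lambda cell: cell["rain"] > 2000, "wetland"),
        ("arable", lambda cell: True,                "arable"),
    ]

    def classify(cell):
        for name, pred, label in RULES:
            if pred(cell):
                return name, label

    def run(cells):
        outputs, provenance = [], []
        for i, cell in enumerate(cells):
            rule, label = classify(cell)
            outputs.append(label)
            provenance.append(f"cell {i}: rule '{rule}' -> {label}")
        # Documentation is derived from the rule model, not written by hand,
        # so re-running the analysis re-generates it consistently.
        doc = ["Classification performed on %s" % date.today().isoformat()]
        doc += [f"rule '{n}': assigns class '{lbl}'" for n, _, lbl in RULES]
        return outputs, provenance, doc

    out, prov, doc = run([{"slope": 45, "rain": 500}, {"slope": 5, "rain": 2500}])
    print(out)  # ['unsuitable', 'wetland']
    ```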

  17. Students' Perceptions of Bilingualism in Spanish and Mandarin Dual Language Programs

    Science.gov (United States)

    Lindholm-Leary, Kathryn

    2016-01-01

    Considerable research documents students' outcomes in dual language (DL) programs, but there is little examination of students' perceptions of bilingualism and its impact on students' cognitive functioning and social relationships, especially with comparative studies across different target languages and student backgrounds. This study, which…

  18. Adobe acrobat: an alternative electronic teaching file construction methodology independent of HTML restrictions.

    Science.gov (United States)

    Katzman, G L

    2001-03-01

    The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appears identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identically on computers running Windows, Macintosh Operating Systems (OS), and the Silicon Graphics Unix-based OS as either a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. This latter display method yields a file viewed through a browser window, yet remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, a hybrid of HTML-distributed Adobe Acrobat generated WWW documents may be a viable alternative for digital teaching file construction and distribution.

  19. The Big Bang - XML expanding the information universe

    International Nuclear Information System (INIS)

    Rutt, S.; Chamberlain, M.; Buckley, G.

    2004-01-01

    The XML language is discussed as a tool for information management. Industries are adopting XML as a means of making disparate systems talk to each other, or of swapping information between different organisations and different operating systems by using a common set of mark-up. More important to this discussion is the ability to use XML within the field of Technical Documentation and Publication. The capabilities of XML for working with different types of documents are presented. In conclusion, a summary is given of the benefits of using an XML solution: Precisely match your requirements at no more initial cost; Single Source Dynamic Content Delivery and Management; 100% of authors' time is spent creating content; Content is no longer locked into its format; Reduced hardware and data storage requirements; Content survives the publishing lifecycle; Auto-versioning/release management control; Workflows can be mapped and electronic audit trails made
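
    The "single source" benefit listed in this record can be illustrated with a toy example (content invented): one XML document is transformed into both a plain-text summary and an HTML fragment, so the content is never locked into a single delivery format.

    ```python
    # Sketch of single-source publishing: one XML source, two output formats.
    # The <manual>/<section> schema is invented for illustration.

    import xml.etree.ElementTree as ET

    SOURCE = """
    <manual>
      <section title="Startup"><para>Turn the key.</para></section>
      <section title="Shutdown"><para>Remove the key.</para></section>
    </manual>
    """

    def to_text(root):
        """Plain-text rendering of every section."""
        return "\n".join(f"{s.get('title')}: {s.findtext('para')}"
                         for s in root.iter("section"))

    def to_html(root):
        """HTML rendering of the same content, from the same source."""
        return "".join(f"<h2>{s.get('title')}</h2><p>{s.findtext('para')}</p>"
                       for s in root.iter("section"))

    root = ET.fromstring(SOURCE)
    print(to_text(root))
    print(to_html(root))
    ```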

  20. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  1. Heritage language and linguistic theory

    Science.gov (United States)

    Scontras, Gregory; Fuchs, Zuzanna; Polinsky, Maria

    2015-01-01

    This paper discusses a common reality in many cases of multilingualism: heritage speakers, or unbalanced bilinguals, simultaneous or sequential, who shifted early in childhood from one language (their heritage language) to their dominant language (the language of their speech community). To demonstrate the relevance of heritage linguistics to the study of linguistic competence more broadly defined, we present a series of case studies on heritage linguistics, documenting some of the deficits and abilities typical of heritage speakers, together with the broader theoretical questions they inform. We consider the reorganization of morphosyntactic feature systems, the reanalysis of atypical argument structure, the attrition of the syntax of relativization, and the simplification of scope interpretations; these phenomena implicate diverging trajectories and outcomes in the development of heritage speakers. The case studies also have practical and methodological implications for the study of multilingualism. We conclude by discussing more general concepts central to linguistic inquiry, in particular, complexity and native speaker competence. PMID:26500595

  2. Bioacoustics of human whistled languages: an alternative approach to the cognitive processes of language

    Directory of Open Access Journals (Sweden)

    Meyer Julien

    2004-01-01

    Full Text Available Whistled languages are a valuable heritage of human culture. This paper gives a first survey of a new multidisciplinary approach to these languages. Previous studies on whistled equivalents of languages have already documented that they can provide significant information about the role of rhythm and melody in language. To substantiate this, most whistles are represented by modulations of frequency, centered around 2000 Hz (±1000 Hz), and often reach a loudness of about 130 dB (measured at 1 m from the source). Their transmission range can reach up to 10 km (as verified in La Gomera, Canary Islands), and the messages can remain understandable even if the signal is deteriorated. In some cultures the use of whistled language is associated with some "talking musical instruments" (e.g. flutes, guitars, harps, gongs, drums, khens). Finally, whistles as a means of conveying information have some analogues in the animal kingdom (e.g. some birds, cetaceans, primates), providing opportunities to compare the acoustic characteristics of the respective signals. With such properties as a reference, the project reported here has two major tasks: to further elucidate the many facets of whistled language and, above all, to help immediately stop the process of its gradual disappearance.

  3. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    Science.gov (United States)

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.

  4. Legislative drafting guidelines: How different are they from controlled language rules for technical writing?

    OpenAIRE

    Höfler Stefan

    2012-01-01

    While human-oriented controlled languages developed and applied in the domain of technical documentation have received considerable attention, language control exerted in the process of legislative drafting has, until recently, gone relatively unnoticed by the controlled language community. This paper considers existing legislative drafting guidelines from the perspective of controlled language. It presents the results of a qualitative comparison of the rule sets of four German-language legis...

  5. XML Flight/Ground Data Dictionary Management

    Science.gov (United States)

    Wright, Jesse; Wiklow, Colette

    2007-01-01

    A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.
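
    The pattern this record describes, generating both legacy ground-system tables and human-readable documentation from one XML dictionary, can be sketched as follows (the schema and channel names are invented, not the actual JPL format): because both outputs come from the same file, they stay synchronized by construction.

    ```python
    # Sketch: one XML telemetry dictionary converted to a legacy fixed-width
    # table and to documentation text. Schema and data are hypothetical.

    import xml.etree.ElementTree as ET

    DICTIONARY = """
    <telemetry>
      <channel id="T-0001" name="battery_voltage" units="V"/>
      <channel id="T-0002" name="tank_pressure" units="kPa"/>
    </telemetry>
    """

    def legacy_rows(root):
        """Fixed-width rows in the style of an older ground-system format."""
        return [f"{c.get('id'):<8}{c.get('name'):<20}{c.get('units')}"
                for c in root.iter("channel")]

    def documentation(root):
        """Human-readable documentation derived from the same dictionary."""
        return [f"Channel {c.get('id')} reports {c.get('name')} in {c.get('units')}."
                for c in root.iter("channel")]

    root = ET.fromstring(DICTIONARY)
    print("\n".join(legacy_rows(root)))
    print("\n".join(documentation(root)))
    ```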

  6. Requirements for the data transfer during the examination of design documentation

    Directory of Open Access Journals (Sweden)

    Karakozova Irina

    2017-01-01

    Full Text Available When design documents are transferred to the examination office, the number of incompatible electronic documents increases dramatically. The article discusses ways to solve the problem of transferring the text and graphic data of design documentation for state and non-state expertise, as well as the verification of estimates and requirement management. Methods for the recognition of system elements and requirements for the transfer of text and graphic design documents are provided. The need to use classification and coding of the various elements of information systems (structures, objects, resources, requirements, contracts, etc.) in data transfer systems is indicated separately. The authors have developed a sequence for document processing and data transmission during the examination, and propose a language for describing the construction of the facility that takes into account the classification criteria of structures and construction works.

  7. [Language Competence and Behavioural Problems in Preschool].

    Science.gov (United States)

    Rißling, J K; Melzer, J; Menke, B; Petermann, F; Daseking, M

    2015-10-01

    Children with language disorders are at increased risk of developing behavioural and emotional problems. The analysis focused on the question of whether behavioural problems differ depending on the type of language deficit. The present study examines the behaviour of preschool children with different language impairments. The results of N=540 children aged between 4;0 and 5;11 years were analyzed. Language impairments were classified into phonetics/phonology (n=44), vocabulary (n=44), grammar (n=58), pragmatics (n=26) and multiple language impairments (n=171). In addition, a distinction was made between deficits in language production and comprehension. The children were compared with an unimpaired control group (n=197). The extent of emotional and behavioural problems was analyzed. The results indicate that emotional and behavioural problems differ depending on the type of language deficit even at preschool age. Especially deficits in language comprehension, pragmatic impairments and multiple language impairments increase the risk of behavioural and emotional problems and hyperactivity. The relationship between language skills and emotional and behavioural problems should be emphasized in developmental observation and documentation in preschool. In particular, the distinction between deficits in pragmatics and behavioural problems requires a differentiated examination to ensure an optimal intervention. © Georg Thieme Verlag KG Stuttgart · New York.

  8. THE NORMALIZATION OF FINANCIAL DATA EXCHANGE OVER THE INTERNET: ADOPTING INTERNATIONAL STANDARD XBRL

    Directory of Open Access Journals (Sweden)

    Catalin Georgel Tudor

    2009-05-01

    Full Text Available The development of a common syntax for EDI (Electronic Data Interchange), XML (eXtensible Markup Language), opened new formalization perspectives for interorganizational data exchanges over the Internet. Many of the organizations involved in the normaliza…

  9. The XML approach to implementing space link extension service management

    Science.gov (United States)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  10. English Language, Linguistics and Literature. : Selected Readings of Classical Writings for Linguistic Theory, Literature History, and Applications of the English Language.

    OpenAIRE

    Haase, Fee

    2009-01-01

    This collection contains selected readings of classical writings for linguistic theory, literature history, and applications of the English language, in documents from the early beginnings to the 20th century.

  11. Filling the Void: Community Spanish Language Programs in Los Angeles Serving to Preserve the Language

    Science.gov (United States)

    Carreira, Maria M.; Rodriguez, Rey M.

    2011-01-01

    An extensive body of research documents the successes of immigrant groups in establishing community language schools. Studied within this tradition, Latino immigrant communities appear to come up short, because of the scarcity of such schools for Spanish-speaking children. However, as we show in this paper, Latino immigrant communities do have…

  12. School Leadership for Dual Language Education: A Social Justice Approach

    Science.gov (United States)

    DeMatthews, David; Izquierdo, Elena

    2016-01-01

    This article examines how a dual language program can be developed within the framework of social justice leadership. The authors analyzed principal, teacher, and parent interview transcripts as well as field notes and key documents to understand the role of school leadership in creating inclusive dual language programs to close the Latina/o-White…

  13. The Effect of Language Ability on Chinese Immigrants’ Earning in Hong Kong

    OpenAIRE

    Chi Man Ng

    2015-01-01

    After the handover of Hong Kong's sovereignty to China in 1997, the gap in importance between English and Putonghua in Hong Kong has been narrowing. Even though English remains an international language and is still adopted in legal documents, foreign investors cannot avoid speaking Putonghua when doing business with Chinese enterprises. These changes in language importance provide a new discourse for human capital theorists. In Hong Kong, natives desire to be proficient in Putonghu...

  14. EquiX-A Search and Query Language for XML.

    Science.gov (United States)

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  15. Linguistic Reception of Latin American Students in Catalonia and Their Responses to Educational Language Policies

    Science.gov (United States)

    Newman, Michael; Patino-Santos, Adriana; Trenchs-Parera, Mireia

    2013-01-01

    This study explores the connections between language policy implementation in three Barcelona-area secondary schools and the language attitudes and behaviors of Spanish-speaking Latin American newcomers. Data were collected through interviews and ethnographic participant observation document indexes of different forms of language socialization…

  16. The Number of Scholarly Documents on the Public Web

    Science.gov (United States)

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
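
    The capture/recapture estimation used in this study can be sketched with the classic Lincoln-Petersen estimator: if two independent "captures" (here, search engines) of sizes n1 and n2 share m items, the total population is estimated as N ≈ n1·n2/m. The numbers below are invented for illustration, not the study's data.

    ```python
    # Lincoln-Petersen capture/recapture estimator, as a minimal sketch.

    def lincoln_petersen(n1, n2, overlap):
        """Estimate total population size from two samples and their overlap."""
        if overlap == 0:
            raise ValueError("samples must overlap for the estimator to apply")
        return n1 * n2 / overlap

    # Hypothetical: engine A indexes 100 documents, engine B indexes 60,
    # and 40 documents appear in both indexes.
    print(lincoln_petersen(100, 60, 40))  # 150.0
    ```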

  17. The number of scholarly documents on the public web.

    Directory of Open Access Journals (Sweden)

    Madian Khabsa

    Full Text Available The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.

  19. Graphical abstract - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available CSML (Cell System Markup Language), see also the CSML website. CSML files may be graphically viewed on Cell Il.... For these pieces of software, see also the Cell Illustrator website or the Cell Illustrator Online web

  20. "Canaries in the Coal Mine": The Reframing of Biculturalism and Non-Maori Participation in Maori Language Learning

    Science.gov (United States)

    Lourie, Megan

    2011-01-01

    Maori language education policy documents reflect an underlying ambivalence about the desired outcomes for non-Maori learners participating in "as-a-subject" Maori language learning. The view of the Maori language as a national language may be in the process of being replaced by a view that identifies the language primarily as a cultural…

  1. Modern Languages and Distance Education: Thirteen Days in the Cloud

    Science.gov (United States)

    Dona, Elfe; Stover, Sheri; Broughton, Nancy

    2014-01-01

    This research study documents the journey of two modern language faculty (Spanish and German) from their original beliefs that teaching foreign languages can only be conducted in a face-to-face format to their eventual development of an online class using Web 2.0 technologies to encourage their students' active skills of reading and speaking in…

  2. SEL/Project Language. Level II, Kindergarten, Volume I (Lessons 1-16).

    Science.gov (United States)

    Valladares, Ann E.; And Others

    The document is an intervention curriculum guide designed to facilitate the initial adjustment of disadvantaged Southeastern children to kindergarten or first grade. The major emphasis is on the teaching of language skills in combination with subject matter learning using a language-experience approach. This volume contains Lessons 1-16 of a…

  3. Cross-Language Plagiarism Detection System Using Latent Semantic Analysis and Learning Vector Quantization

    Directory of Open Access Journals (Sweden)

    Anak Agung Putri Ratna

    2017-06-01

    Full Text Available Computerized cross-language plagiarism detection has recently become essential. With the scarcity of scientific publications in Bahasa Indonesia, many Indonesian authors frequently consult publications in English in order to boost the quantity of scientific publications in Bahasa Indonesia (which is currently rising). Due to the syntax disparity between Bahasa Indonesia and English, most of the existing methods for automated cross-language plagiarism detection do not provide satisfactory results. This paper analyses the possibility of developing Latent Semantic Analysis (LSA) for a computerized cross-language plagiarism detector for two languages with different syntax. To improve performance, various alterations in LSA are suggested. By using a learning vector quantization (LVQ) classifier in the LSA and taking into account the Frobenius norm, output has reached up to 65.98% in accuracy. The results of the experiments showed that the best accuracy achieved is 87% with a document size of 6 words, and the document definition size must be kept below 10 words in order to maintain high accuracy. Additionally, based on experimental results, this paper suggests utilizing the frequency occurrence method as opposed to the binary method for the term-document matrix construction.
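
    The term-document matrix construction this record discusses can be sketched as follows, contrasting frequency counts (the weighting the authors found superior) with binary weights, and comparing documents by cosine similarity. Full LSA would additionally factor the matrix with a truncated SVD, which is omitted here; the toy documents are invented.

    ```python
    # Sketch of term-document matrix construction (frequency vs. binary)
    # and cosine similarity between document vectors.

    import math
    from collections import Counter

    def term_document_matrix(docs, binary=False):
        """Rows are documents, columns are vocabulary terms."""
        vocab = sorted({w for d in docs for w in d.split()})
        rows = []
        for d in docs:
            counts = Counter(d.split())
            rows.append([(1 if counts[w] else 0) if binary else counts[w]
                         for w in vocab])
        return vocab, rows

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    docs = ["citra makan nasi makan", "citra makan nasi", "buku di meja"]
    _, m = term_document_matrix(docs)
    print(cosine(m[0], m[1]))  # near-duplicate pair scores high
    print(cosine(m[0], m[2]))  # unrelated pair scores 0.0
    ```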

  4. Bibliometry of the Revista de Biología Tropical / International Journal of Tropical Biology and Conservation: document types, languages, countries, institutions, citations and article lifespan.

    Science.gov (United States)

    Monge-Nájera, Julián; Ho, Yuh-Shan

    2016-09-01

    The Revista de Biología Tropical / International Journal of Tropical Biology and Conservation, founded in 1953, publishes feature articles about tropical nature and is considered one of the leading journals in Latin America. This article analyzes document types, languages, countries, institutions, citations and, for the first time, article lifespan, from 1976 through 2014. We analyzed 3 978 documents from the Science Citation Index Expanded. Articles comprised 88 % of the total production and had 3.7 citations on average, lower than reviews. Spanish and English articles were nearly equal in numbers, and citation of English articles was only slightly higher. Costa Rica, Mexico, and the USA are the countries with the most articles, and the leading institutions were Universidad de Costa Rica, Universidad Nacional, Universidad Nacional Autónoma de México and Universidad de Oriente (Venezuela). The citation lifespan of articles is long, around 37 years. It is not surprising that Costa Rica, Mexico, and Venezuela lead in productivity and cooperation, because they are mostly covered by tropical ecosystems and share a common culture and a tradition of scientific cooperation. The same applies to the leading institutions, which are among the largest Spanish-language universities in the neotropical region. American output can be explained by the regional presence of the Smithsonian Tropical Research Institute and the Organization for Tropical Studies. Tropical research does not have the rapid change typical of medical research, and for this reason the impact factor misses most citations for the Revista, which are made after the two-year window used by the Web of Science. This issue is especially damaging for the Revista because most journals that deal with tropical biology are never checked when citations are counted by the Science Citation Index.

  5. Features based approach for indexation and representation of unstructured Arabic documents

    Directory of Open Access Journals (Sweden)

    Mohamed Salim El Bazzi

    2017-06-01

    Full Text Available The increase of textual information published in Arabic on the internet, in public libraries, and in administrations requires implementing effective techniques for the extraction of relevant information contained in large corpora of texts. The purpose of indexing is to create a document representation that makes it easy to find and identify relevant information in a set of documents. However, mining textual data is becoming a complicated task, especially when semantics are taken into consideration. In this paper, we present an indexation system based on contextual representation that takes advantage of the semantic links present in a document. Our approach is based on the extraction of keyphrases; each document is then represented by its relevant keyphrases instead of its simple keywords. The experimental results confirm the effectiveness of our approach.
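
    The difference between keyword and keyphrase representation can be illustrated with a small sketch (not the authors' actual algorithm, and using an English toy document rather than Arabic): candidate bigrams are filtered against a stopword list and ranked by frequency, and the top phrases become the document's index representation.

    ```python
    # Illustrative keyphrase extraction: score stopword-filtered bigrams
    # by frequency. Stopword list and scoring are invented simplifications.

    from collections import Counter

    STOPWORDS = {"the", "of", "a", "is", "in", "and", "to"}

    def keyphrases(text, top_n=2):
        """Return the top_n most frequent non-stopword bigrams."""
        words = [w.lower().strip(".,") for w in text.split()]
        bigrams = [
            f"{a} {b}" for a, b in zip(words, words[1:])
            if a not in STOPWORDS and b not in STOPWORDS
        ]
        return [p for p, _ in Counter(bigrams).most_common(top_n)]

    sample = ("Text indexing builds a document representation. "
              "A good document representation supports text indexing.")
    print(keyphrases(sample))  # ['text indexing', 'document representation']
    ```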

  6. X-PAT: a multiplatform patient referral data management system for small healthcare institution requirements.

    Science.gov (United States)

    Masseroli, Marco; Marchente, Mario

    2008-07-01

    We present X-PAT, a platform-independent software prototype that is able to manage patient referral multimedia data in an intranet network scenario according to the specific control procedures of a healthcare institution. It is a self-developed storage framework based on a file system, implemented in eXtensible Markup Language (XML) and PHP Hypertext Preprocessor Language, and addressed to the requirements of limited-dimension healthcare entities (small hospitals, private medical centers, outpatient clinics, and laboratories). In X-PAT, healthcare data descriptions, stored in a novel Referral Base Management System (RBMS) according to Health Level 7 Clinical Document Architecture Release 2 (CDA R2) standard, can be easily applied to the specific data and organizational procedures of a particular healthcare working environment thanks also to the use of standard clinical terminology. Managed data, centralized on a server, are structured in the RBMS schema using a flexible patient record and CDA healthcare referral document structures based on XML technology. A novel search engine allows defining and performing queries on stored data, whose rapid execution is ensured by expandable RBMS indexing structures. Healthcare personnel can interface the X-PAT system, according to applied state-of-the-art privacy and security measures, through friendly and intuitive Web pages that facilitate user acceptance.

  7. HTML 5 up and running

    CERN Document Server

    Pilgrim, Mark

    2010-01-01

    If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you through the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to

  8. Transcription of Spanish Historical Handwritten Documents with Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Emilio Granell

    2018-01-01

    Full Text Available The digitization of historical handwritten document images is important for the preservation of cultural heritage. Moreover, the transcription of text images obtained from digitization is necessary to provide efficient information access to the content of these documents. Handwritten Text Recognition (HTR) has become an important research topic in the areas of image and computational language processing that allows us to obtain transcriptions from text images. State-of-the-art HTR systems are, however, far from perfect. One difficulty is that they have to cope with image noise and handwriting variability. Another difficulty is the presence of a large amount of Out-Of-Vocabulary (OOV) words in ancient historical texts. A solution to this problem is to use external lexical resources, but such resources might be scarce or unavailable given the nature and the age of such documents. This work proposes a solution to avoid this limitation. It consists of associating a powerful optical recognition system that will cope with image noise and variability, with a language model based on sub-lexical units that will model OOV words. Such a language modeling approach reduces the size of the lexicon while increasing the lexicon coverage. Experiments are first conducted on the publicly available Rodrigo dataset, which contains the digitization of an ancient Spanish manuscript, with a recognizer based on Hidden Markov Models (HMMs). They show that sub-lexical units outperform word units in terms of Word Error Rate (WER), Character Error Rate (CER) and OOV word accuracy rate. This approach is then applied to deep net classifiers, namely Bi-directional Long-Short Term Memory (BLSTMs) and Convolutional Recurrent Neural Nets (CRNNs). Results show that CRNNs outperform HMMs and BLSTMs, reaching the lowest WER and CER for this image dataset and significantly improving OOV recognition.
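
The lexicon-coverage argument can be shown with a toy sketch that uses single characters as the sub-lexical unit. The training words are invented examples, and a real HTR system would combine the recognizer with sub-word n-gram language models rather than the bare membership test below.

```python
# Toy illustration of why sub-lexical units raise coverage: a character-level
# "lexicon" built from training words accepts any word spelled with seen
# characters, even words never observed in training (OOV words).
def char_lexicon(training_words):
    return {ch for word in training_words for ch in word}

def is_covered(word, lexicon):
    return all(ch in lexicon for ch in word)

train = ["rey", "reino", "corona", "tierra"]  # invented training vocabulary
lexicon = char_lexicon(train)

print(is_covered("reinar", lexicon))  # True: OOV word, but every character was seen
print("reinar" in train)              # False: a word-level lexicon rejects it
```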

  9. LDRD 149045 final report distinguishing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A.

    2010-09-01

    This LDRD 149045 final report describes work that Sandians Scott A. Mitchell, Randall Laviolette, Shawn Martin, Warren Davis, Cindy Philips and Danny Dunlavy performed in 2010. Prof. Afra Zomorodian provided insight. This was a small late-start LDRD. Several other ongoing efforts were leveraged, including the Networks Grand Challenge LDRD and the Computational Topology CSRF project; some of the leveraged work is described here. We proposed a sentence mining technique that exploited both the distribution and the order of parts-of-speech (POS) in sentences in English language documents. The ultimate goal was to be able to discover 'call-to-action' framing documents hidden within a corpus of mostly expository documents, even if the documents were all on the same topic and used the same vocabulary. Using POS was novel. We also took a novel approach to analyzing POS: we used the hypothesis that English follows a dynamical system and the POS are trajectories from one state to another. We analyzed the sequences of POS using support vector machines and the cycles of POS using computational homology. We discovered that the POS were a very weak signal and did not support our hypothesis well; our original goal appeared to be unobtainable with our original approach. We turned our attention to an aspect of a more traditional approach to distinguishing documents. Latent Dirichlet Allocation (LDA) turns documents into bags-of-words and then into mixture-model points. A distance function is used to cluster groups of points to discover relatedness between documents. We performed a geometric and algebraic analysis of the most popular distance functions and made some significant and surprising discoveries, described in a separate technical report.
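
The LDA step described above can be sketched as follows: each document becomes a point on the probability simplex (its topic mixture), and a distance function on those points drives clustering. The mixture vectors are invented, and Hellinger distance is shown only as one common choice; the report's actual analysis compared several such functions.

```python
import math

# Hellinger distance between two topic-mixture vectors (points on the
# probability simplex). Documents with similar mixtures end up close,
# so a clustering step can group them as "related".
def hellinger(p, q):
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

doc_a = [0.7, 0.2, 0.1]   # mostly topic 0
doc_b = [0.6, 0.3, 0.1]   # a similar mixture
doc_c = [0.1, 0.1, 0.8]   # mostly topic 2

print(hellinger(doc_a, doc_b) < hellinger(doc_a, doc_c))  # True
```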

  10. First-Language Skills of Bilingual Turkish Immigrant Children Growing up in a Dutch Submersion Context

    Science.gov (United States)

    Akoglu, Gözde; Yagmur, Kutlay

    2016-01-01

    The interdependence between the first and second language of bilingual immigrant children has not received sufficient attention in research. Most studies concentrate on mainstream language skills of immigrant pupils. In some studies, the gaps in the language development of immigrant children are documented by comparing mainstream pupils with…

  11. Documenting Sociolinguistic Variation in Lesser-Studied Indigenous Communities: Challenges and Practical Solutions

    Science.gov (United States)

    Mansfield, John; Stanford, James

    2017-01-01

    Documenting sociolinguistic variation in lesser-studied languages presents methodological challenges, but also offers important research opportunities. In this paper we examine three key methodological challenges commonly faced by researchers who are outsiders to the community. We then present practical solutions for successful variationist…

  12. Bilingual Latino Middle Schoolers on Languaging and Racialization in the US

    Science.gov (United States)

    Hesson, Sarah

    2016-01-01

    This dissertation explores bilingual Latino middle schoolers' articulated understandings of their language practices as well as the links between language practices and processes of racialization and discrimination in the US. The research was conducted in the context of an after-school program whose explicit aim was to not only document students'…

  13. Abstraction Mechanisms in the BETA Programming Language

    DEFF Research Database (Denmark)

    Kristensen, Bent Bruun; Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    1983-01-01

    The BETA programming language is developed as part of the BETA project. The purpose of this project is to develop concepts, constructs and tools in the field of programming and programming languages. BETA has been developed from 1975 on, and the various stages of the language are documented in [BETA…]. It is then necessary that the abstraction mechanisms are powerful in order to define more specialized constructs. BETA is an object oriented language like SIMULA 67 ([SIMULA]) and SMALLTALK ([SMALLTALK]). By this is meant that a construct like the SIMULA class/subclass mechanism is fundamental in BETA. In contrast […], covering both data, procedural and control abstractions, substituting constructs like class, procedure, function and type. Correspondingly, objects, procedure activation records and variables are all regarded as special cases of the basic building block of program executions: the entity. A pattern thus…

  14. The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners

    Science.gov (United States)

    Lu, Jenny; Jones, Anna; Morgan, Gary

    2016-01-01

    There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…

  15. University of Virginia "virtual" reactor facility tours

    International Nuclear Information System (INIS)

    Krause, D.R.; Mulder, R.U.

    1995-01-01

    An electronic information and tour book has been constructed for the University of Virginia reactor (UVAR) facility. Utilizing the global Internet, the document resides on the University of Virginia World Wide Web (WWW or W3) server within the UVAR Homepage at http://www.virginia.edu/~reactor/. It is quickly accessible wherever an Internet connection exists. The UVAR Homepage files are accessed with the hypertext transfer protocol (http) prefix. The files are written in hypertext markup language (HTML), a very simple method of preparing ASCII text for W3 presentation. HTML allows the use of various hierarchies of headers, indentation, fonts, and the linking of words and/or pictures to other addresses (uniform resource locators). The linking of texts, pictures, sounds, and server addresses is known as hypermedia.
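
The hypermedia linking described above comes down to anchor tags that bind words to other URLs. A minimal sketch using Python's standard-library HTML parser is shown below; the page content is a made-up stand-in for the UVAR tour pages, not their actual markup.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

# Invented example page, not the real UVAR markup.
page = """<html><body>
<h1>UVAR Facility Tour</h1>
<p>See the <a href="reactor-core.html">reactor core</a> and the
<a href="control-room.html">control room</a>.</p>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['reactor-core.html', 'control-room.html']
```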

  16. DEVELOPMENT OF A DIALOGUE SIMULATOR USING THE AIML LANGUAGE WITH A DATABASE

    Directory of Open Access Journals (Sweden)

    André Damasceno Aoki

    2011-12-01

    Full Text Available This paper presents an application to simulate dialogues (a chatterbot) using the AIML language (Artificial Intelligence Markup Language) with a new functionality to provide dynamic answers from database queries. This kind of application is used to answer questions or doubts posed by people. Traditional systems are based on fixed answers in the AIML language and search for answers in knowledge bases that offer low maintainability, compromising responses that require periodic updates. The AIML language has been extended so it can be applied to a larger source of knowledge (a database).
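
The idea of an AIML category whose template answers from a database rather than a fixed string can be sketched as below. This is not a real AIML interpreter: the pattern table, wildcard handling, and "database" are all invented for illustration, and an actual system would use an AIML engine and SQL queries.

```python
# Invented stand-in for the paper's database of dynamic answers.
database = {"opening hours": "9:00-18:00", "location": "Building B"}

# AIML-style (pattern, template) pairs; "*" plays the role of the AIML
# wildcard, and the first template looks its answer up dynamically.
categories = [
    ("WHAT ARE YOUR *", lambda topic: database.get(topic, "I don't know.")),
    ("HELLO", lambda _: "Hi! Ask me about our opening hours."),
]

def respond(user_input):
    text = user_input.strip().upper().rstrip("?!.")
    for pattern, template in categories:
        if pattern.endswith("*"):
            prefix = pattern[:-1].strip()
            if text.startswith(prefix):
                topic = text[len(prefix):].strip().lower()
                return template(topic)
        elif text == pattern:
            return template(None)
    return "I don't know."

print(respond("What are your opening hours?"))  # 9:00-18:00
```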

  17. The Islamic State Battle Plan: Press Release Natural Language Processing

    Science.gov (United States)

    2016-06-01

    …we apply Natural Language Processing (NLP) tools to a unique database of text documents collected by Whiteside (2014). His collection…from Arabic to English. Compared to other terrorism databases, Whiteside's collection methodology limits the scope of the database and avoids coding…

  18. LANGUAGE TRAINING

    CERN Multimedia

    2004-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. LANGUAGE TRAINING Françoise Benz tel. 73127 language.training@cern.ch FRENCH TRAINING General and Professional French Courses The next session will take place from 26 January to 02 April 2004. These courses are open to all persons working on the CERN site, and to their spouses. For registration and further information on the courses, please consult our Web pages: http://cern.ch/Training or contact Mrs. Benz: Tel. 73127. Writing Professional Documents in French The next session will take place from 26 January to 02 April 2004. This course is designed for people wi...

  20. High-Level Language Production in Parkinson's Disease: A Review

    Directory of Open Access Journals (Sweden)

    Lori J. P. Altmann

    2011-01-01

    Full Text Available This paper discusses impairments of high-level, complex language production in Parkinson's disease (PD), defined as sentence and discourse production, and situates these impairments within the framework of current psycholinguistic theories of language production. The paper comprises three major sections: an overview of the effects of PD on the brain and cognition, a review of the literature on language production in PD, and a discussion of the stages of the language production process that are impaired in PD. Overall, the literature converges on a few common characteristics of language production in PD: reduced information content, impaired grammaticality, disrupted fluency, and reduced syntactic complexity. Many studies also document the strong impact of differences in cognitive ability on language production. Based on the data, PD affects all stages of language production including conceptualization and functional and positional processing. Furthermore, impairments at all stages appear to be exacerbated by impairments in cognitive abilities.