WorldWideScience

Sample records for model markup language

  1. Extensible Markup Language Data Mining System Model

    Institute of Scientific and Technical Information of China (English)

    李炜; 宋瀚涛

    2003-01-01

Existing data mining methods focus mostly on relational databases and structured data, not on complex semi-structured data such as Extensible Markup Language (XML) documents. By converting the XML document type description into relational semantics that record the relations in the XML data, and by using an XML data mining language, the XML data mining system presents a strategy for mining information from XML.
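A common way to make XML amenable to relational-style mining, as the abstract sketches, is to shred elements into tuples whose columns record the relations among elements. The following Python sketch is illustrative only; the element names and the flattened row layout are hypothetical and are not taken from the cited system.

```python
# Illustrative sketch: shredding XML into relational tuples so that
# conventional (relational) mining algorithms can be applied.
# Element names and the target row layout are hypothetical.
import xml.etree.ElementTree as ET

xml_doc = """
<orders>
  <order id="o1"><customer>Ann</customer><item sku="A10" qty="2"/></order>
  <order id="o2"><customer>Bob</customer><item sku="B20" qty="1"/></order>
</orders>
"""

def shred(xml_text):
    """Flatten each <order>/<item> pair into a relational row (a dict)."""
    rows = []
    root = ET.fromstring(xml_text)
    for order in root.findall("order"):
        for item in order.findall("item"):
            rows.append({
                "order_id": order.get("id"),
                "customer": order.findtext("customer"),
                "sku": item.get("sku"),
                "qty": int(item.get("qty")),
            })
    return rows

for row in shred(xml_doc):
    print(row)
```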

  2. Modeling Hydrates and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Weihua Wang

    2007-06-01

Full Text Available Natural gas hydrates, as important potential fuels, flow assurance hazards, and possible factors in initiating submarine geo-hazards and global climate change, have attracted the interest of scientists all over the world. After two centuries of hydrate research, a great amount of scientific data on gas hydrates has been accumulated. Therefore the means to manage, share, and exchange these data have become an urgent task. At present, metadata (markup language) is recognized as one of the most efficient ways to facilitate data management, storage, integration, exchange, discovery and retrieval. Therefore the CODATA Gas Hydrate Data Task Group proposed and specified the Gas Hydrate Markup Language (GHML) as an extensible conceptual metadata model to characterize the features of data on gas hydrates. This article introduces the details of the modeling portion of GHML.

  3. Geography Markup Language

    OpenAIRE

    Burggraf, David S

    2006-01-01

    Geography Markup Language (GML) is an XML application that provides a standard way to represent geographic information. GML is developed and maintained by the Open Geospatial Consortium (OGC), which is an international consortium consisting of more than 250 members from industry, government, and university departments. Many of the conceptual models described in the ISO 19100 series of geomatics standards have been implemented in GML, and it is itself en route to becoming an ISO Standard (TC/2...

  4. A Leaner, Meaner Markup Language.

    Science.gov (United States)

    Online & CD-ROM Review, 1997

    1997-01-01

In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  5. Geography Markup Language

    Directory of Open Access Journals (Sweden)

    David S Burggraf

    2006-11-01

Full Text Available Geography Markup Language (GML) is an XML application that provides a standard way to represent geographic information. GML is developed and maintained by the Open Geospatial Consortium (OGC), which is an international consortium consisting of more than 250 members from industry, government, and university departments. Many of the conceptual models described in the ISO 19100 series of geomatics standards have been implemented in GML, and it is itself en route to becoming an ISO Standard (TC/211 CD 19136). An overview of GML together with its implications for the geospatial web is given in this paper.
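As a concrete illustration of the kind of geometry encoding GML standardizes, the short Python sketch below emits a GML point using only the standard library. It is a minimal, simplified example under the standard GML namespace, not a schema-valid document against any particular GML application schema.

```python
# Minimal sketch of a GML-style point geometry built with the standard library.
# Simplified for illustration; a schema-valid GML document needs an application
# schema and additional feature metadata.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML_NS)

point = ET.Element(f"{{{GML_NS}}}Point", {"srsName": "urn:ogc:def:crs:EPSG::4326"})
pos = ET.SubElement(point, f"{{{GML_NS}}}pos")
pos.text = "52.52 13.40"  # latitude longitude

print(ET.tostring(point, encoding="unicode"))
```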

  6. The GPlates Geological Information Model and Markup Language

    Directory of Open Access Journals (Sweden)

    X. Qin

    2012-07-01

Full Text Available Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-dimensional spatial and 1-dimensional temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological "deep time" analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep

  7. The GPlates Geological Information Model and Markup Language

    Directory of Open Access Journals (Sweden)

    X. Qin

    2012-10-01

Full Text Available Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological deep time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep-time spatio

  8. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.

  9. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  10. Development of clinical contents model markup language for electronic health records.

    Science.gov (United States)

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML)-based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.

  11. The medical simulation markup language - simplifying the biomechanical modeling workflow.

    Science.gov (United States)

    Suwelack, Stefan; Stoll, Markus; Schalck, Sebastian; Schoch, Nicolai; Dillmann, Rüdiger; Bendl, Rolf; Heuveline, Vincent; Speidel, Stefanie

    2014-01-01

Modeling and simulation of the human body by means of continuum mechanics has become an important tool in diagnostics, computer-assisted interventions and training. This modeling approach seeks to construct patient-specific biomechanical models from tomographic data. Usually many different tools, such as segmentation and meshing algorithms, are involved in this workflow. In this paper we present a generalized and flexible description for biomechanical models, the Medical Simulation Markup Language (MSML). The unique feature of the new modeling language is that it not only describes the final biomechanical simulation, but also the workflow by which the biomechanical model is constructed from tomographic data. In this way, the MSML can act as a middleware between all tools used in the modeling pipeline. The MSML thus greatly facilitates the prototyping of medical simulation workflows for clinical and research purposes. In this paper, we not only detail the XML-based modeling scheme, but also present a concrete implementation. Different examples highlight the flexibility, robustness and ease-of-use of the approach.

  12. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development.

    Science.gov (United States)

    Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-06-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.

  13. SBRML: a markup language for associating systems biology data with models.

    Science.gov (United States)

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.

  14. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

    Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed in translation and application to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
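The composition step described above can be pictured as a graph traversal that collects the DNA components annotating each model element into one composite. The sketch below uses plain Python dictionaries and a depth-first walk; it is a conceptual illustration of the idea, not the iBioSim implementation, and the element and component names are made up.

```python
# Conceptual sketch: collect DNA-component annotations attached to model
# elements by walking the cause-and-effect graph of the model.
# Names are hypothetical; this is not the iBioSim implementation.

# model element -> SBOL DNA component annotating it (if any)
annotations = {
    "promoter_rxn": "pLac",
    "cds_species": "lacI_cds",
    "reporter_rxn": "gfp_cassette",
}

# cause-and-effect edges between model elements (element -> downstream elements)
edges = {
    "promoter_rxn": ["cds_species"],
    "cds_species": ["reporter_rxn"],
    "reporter_rxn": [],
}

def compose(start):
    """Depth-first walk; return annotating components in traversal order."""
    composite, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in annotations:
            composite.append(annotations[node])
        stack.extend(reversed(edges.get(node, [])))
    return composite

print(compose("promoter_rxn"))  # -> ['pLac', 'lacI_cds', 'gfp_cassette']
```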

  15. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
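For orientation, the sketch below prints the skeleton of an SBML Level 2 Version 5 document using the standard library. The namespace follows SBML's level/version URI pattern, but the model content is a toy example; in practice a library such as libSBML would be used and the result checked against the specification's validation rules.

```python
# Sketch of a minimal SBML Level 2 Version 5 document skeleton.
# Toy content only; real models should be produced with libSBML (or similar)
# and validated against the specification.
import xml.etree.ElementTree as ET

SBML_NS = "http://www.sbml.org/sbml/level2/version5"  # level/version namespace pattern
ET.register_namespace("", SBML_NS)

sbml = ET.Element(f"{{{SBML_NS}}}sbml", {"level": "2", "version": "5"})
model = ET.SubElement(sbml, f"{{{SBML_NS}}}model", {"id": "toy_model"})

compartments = ET.SubElement(model, f"{{{SBML_NS}}}listOfCompartments")
ET.SubElement(compartments, f"{{{SBML_NS}}}compartment", {"id": "cell", "size": "1"})

species = ET.SubElement(model, f"{{{SBML_NS}}}listOfSpecies")
ET.SubElement(species, f"{{{SBML_NS}}}species",
              {"id": "S1", "compartment": "cell", "initialConcentration": "10"})

print(ET.tostring(sbml, encoding="unicode"))
```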

  16. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Science.gov (United States)

Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.

    2017-01-01

    Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569

  17. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  18. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
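The core idea behind collectively fitting multiple datasets is to minimize a single objective that sums the residuals over all experiments. The sketch below shows that idea for a one-parameter toy model using NumPy and SciPy; it is unrelated to the SBML-PET-MPI code base and ignores the MPI parallelization entirely.

```python
# Conceptual sketch: fit one shared parameter k to several datasets at once
# by minimizing the pooled sum of squared residuals. Toy model, not SBML-PET-MPI.
import numpy as np
from scipy.optimize import minimize

def model(t, k):
    """Toy exponential-decay observable."""
    return np.exp(-k * t)

# two "experimental" datasets (time points, measurements), here synthetic
datasets = [
    (np.array([0.0, 1.0, 2.0]), np.array([1.00, 0.60, 0.37])),
    (np.array([0.5, 1.5, 2.5]), np.array([0.78, 0.47, 0.28])),
]

def pooled_sse(params):
    k = params[0]
    return sum(np.sum((model(t, k) - y) ** 2) for t, y in datasets)

fit = minimize(pooled_sse, x0=[1.0])
print("estimated k:", fit.x[0])
```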

  19. Data Display Markup Language (DDML) Handbook

    Science.gov (United States)

    2017-01-31

Moreover, the tendency of T&E is towards a plug-and-play-like data acquisition system that requires standard languages and modules for data displays... [Document 127-17: Data Display Markup Language (DDML) Handbook, January 2017, prepared by the Telemetry Group; Distribution A: approved for...]

  20. Extending the Concepts of Normalization from Relational Databases to Extensible-Markup-Language Databases Model

    Directory of Open Access Journals (Sweden)

    H.J. F. El-Sofany

    2008-01-01

Full Text Available In this study we have studied the problem of how to extend the concepts of Functional Dependency (FD) and normalization in relational databases to include the eXtensible Markup Language (XML) model. We show that, like relational databases, XML documents may contain redundant information and that this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way for converting an arbitrary XML Schema to a well-designed one that avoids these problems. We introduced new definitions of FD and normal forms of XML Schema (X-1NF, X-2NF, X-3NF and X-BCNF). We showed that our normal forms are necessary and sufficient to ensure that all conforming XML documents have no redundancies.
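A small example makes the redundancy problem concrete: if a functional dependency such as course → lecturer holds, repeating the lecturer under every enrolment element duplicates the fact and invites update anomalies. The snippet below is purely illustrative; the schema is invented here, not taken from the paper.

```python
# Illustrative only: an XML document in which the dependency
# course -> lecturer is repeated for every enrolment, so updating a
# lecturer's name requires touching many elements (update anomaly).
import xml.etree.ElementTree as ET

redundant = """
<enrolments>
  <enrolment student="s1" course="DB101" lecturer="Smith"/>
  <enrolment student="s2" course="DB101" lecturer="Smith"/>
  <enrolment student="s3" course="DB101" lecturer="Smith"/>
</enrolments>
"""

root = ET.fromstring(redundant)
lecturers = {e.get("course"): set() for e in root}
for e in root:
    lecturers[e.get("course")].add(e.get("lecturer"))

# A "normalized" design would store each course -> lecturer fact exactly once.
print(lecturers)  # {'DB101': {'Smith'}} -- one fact, stored three times above
```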

  1. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including

  2. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

Full Text Available Abstract Background Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. Methods We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. Results The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets

  3. The Geometry Description Markup Language

    Institute of Scientific and Technical Information of China (English)

Radovan Chytracek

    2001-01-01

Currently, a lot of effort is being put on designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim to make this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to represent geometry data is available and such data can be found in various forms starting from custom semi-structured text files, source code (C/C++/FORTRAN), to XML and database solutions. The XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. This article introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging of geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.
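To make the idea of an XML geometry description concrete, the sketch below emits a GDML-like fragment describing a single box solid referenced from a world volume. It follows the general conventions of GDML but is simplified and is not guaranteed to validate against the GDML schema.

```python
# Simplified, GDML-like geometry fragment: one box solid referenced by a volume.
# Element and attribute names follow common GDML conventions, but this sketch
# is illustrative and not guaranteed to be schema-valid.
import xml.etree.ElementTree as ET

gdml = ET.Element("gdml")
solids = ET.SubElement(gdml, "solids")
ET.SubElement(solids, "box", {"name": "detector_box", "x": "100", "y": "100",
                              "z": "300", "lunit": "mm"})

structure = ET.SubElement(gdml, "structure")
world = ET.SubElement(structure, "volume", {"name": "world"})
phys = ET.SubElement(world, "physvol")
ET.SubElement(phys, "volumeref", {"ref": "detector_box_logical"})

print(ET.tostring(gdml, encoding="unicode"))
```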

  4. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
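The "simple edit distance algorithm" mentioned is, in its generic form, the classic dynamic-programming computation below. This sketch operates on plain character sequences and does not attempt the Southeast Asian font handling that the article adapts it to, nor is it Hart's markup software.

```python
# Generic edit (Levenshtein) distance via dynamic programming.
# Textbook algorithm only; not Hart's markup software or its
# Southeast Asian font adaptation.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```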

  5. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
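The memory and runtime argument for array-aware analysis can be illustrated outside SBML as well: rather than expanding N identical cell models into N copies, a simulator can keep one state array per variable. The NumPy sketch below integrates a population of toy gene-expression cells in vectorized form; it is a conceptual illustration, not the arrays-package simulator described in the paper.

```python
# Conceptual sketch: simulate N identical toy gene-expression cells without
# expanding the model N times, by keeping one state array per variable.
# Not the SBML arrays-package simulator itself.
import numpy as np

N = 1000                                   # population size
rng = np.random.default_rng(0)
protein = rng.uniform(0.0, 1.0, size=N)    # one array, not N model copies

k_prod, k_deg, dt = 1.0, 0.5, 0.01
for _ in range(5000):                      # simple Euler integration
    protein += dt * (k_prod - k_deg * protein)

print("mean steady-state level:", protein.mean())  # ~ k_prod / k_deg = 2.0
```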

  6. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    Science.gov (United States)

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in JAVA using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.

  7. The Effect of using Facebook Markup Language (FBML) for Designing an E-Learning Model in Higher Education

    OpenAIRE

    Mohammed Amasha; Salem Alkhalaf

    2015-01-01

This study examines the use of Facebook Markup Language (FBML) to design an e-learning model to facilitate teaching and learning in an academic setting. The qualitative research study presents a case study on how Facebook is used to support collaborative activities in higher education. We used FBML to design an e-learning model called processes for e-learning resources in the Specialist Learning Resources Diploma (SLRD) program. Two groups drawn from the SLRD program were used; first were th...

  8. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) will focus on genomic sequence variation data and human health applications, such as gene-based medicine or pharmacogenomics. We developed GSVML through eight steps, based on case analysis and domain investigations. By focusing the design scope on human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format, while maximizing the data covering range. We also attempted to ensure communication and interface ability with other markup languages, for exchange of omics data among various omics researchers or facilities. The interface ability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information, and have internal relationships. For the design, we examined 6 cases for three criteria as human health applications and 15 data elements for three criteria as data formats for genomic sequence variation data exchange. The data format of five international SNP databases and six markup languages and the interface ability to the Health Level Seven Genotype Model in terms of 317 items were investigated. GSVML was developed as

  9. Systems biology markup language: Level 2 and beyond.

    Science.gov (United States)

    Finney, A; Hucka, M

    2003-12-01

    The SBML (systems biology markup language) is a standard exchange format for computational models of biochemical networks. We continue developing SBML collaboratively with the modelling community to meet their evolving needs. The recently introduced SBML Level 2 includes several enhancements to the original Level 1, and features under development for SBML Level 3 include model composition, multistate chemical species and diagrams.

  10. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. ART-ML proposes a representation way, expanding ARTool, for interpretability of the individual resources, creating a standard unified model for the description of data and, consequently, a format for their exchange and representation that is machine independent. More specifically, ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool are based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software

  11. On the Power of Fuzzy Markup Language

    CERN Document Server

    Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui

    2013-01-01

One of the most successful methodologies to arise from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts dating to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted upon almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0 has been making scenarios of ubiquitous computing much more feasible; consequently information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Mark-up Language is a son of this synergistic view, where some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here. The concept of a Fuzzy Control that is conceived and modeled in terms...

  12. HGML: a hypertext guideline markup language.

    Science.gov (United States)

    Hagerty, C G; Pickens, D; Kulikowski, C; Sonnenberg, F

    2000-01-01

    Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form.

  13. Standardization of seismic tomographic models and earthquake focal mechanisms data sets based on web technologies, visualization with keyhole markup language

    Science.gov (United States)

    Postpischl, Luca; Danecek, Peter; Morelli, Andrea; Pondrelli, Silvia

    2011-01-01

We present two projects in seismology that have been ported to web technologies, which provide results in Keyhole Markup Language (KML) visualization layers. These use the Google Earth geo-browser as a flexible platform that can substitute for specialized graphical tools to perform qualitative visual data analyses and comparisons. The Network of Research Infrastructures for European Seismology (NERIES) Tomographic Earth Model Repository contains data sets from over 20 models from the literature. A hierarchical structure of folders that represent the sets of depths for each model is implemented in KML, and this immediately results in an intuitive interface for users to navigate freely and to compare tomographic plots. The KML layer for the European-Mediterranean Regional Centroid-Moment Tensor Catalog displays the focal mechanism solutions of moderate-magnitude earthquakes from 1997 to the present. Our aim in both projects was also to propose standard representations of scientific data sets. Here, the general semantic approach of an XML framework has an important impact that must be further explored, although we find that the KML syntax places more emphasis on aspects of detailed visualization. We have thus used, and propose the use of, JavaScript Object Notation (JSON), another semantic notation that stems from the web-development community and provides a compact, general-purpose data-exchange format.
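For reference, a single event in such a KML layer boils down to a Placemark with a point geometry and descriptive text. The Python sketch below emits one such placemark; the coordinates and metadata are invented for illustration, and the real catalogue layers carry considerably more styling and content.

```python
# Minimal KML placemark of the kind used to plot an earthquake location.
# Coordinates and description are invented for illustration.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Mw 5.1 example event"
ET.SubElement(pm, f"{{{KML_NS}}}description").text = "Centroid-moment-tensor solution (illustrative)"
point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "13.35,42.35,0"  # lon,lat,alt

print(ET.tostring(kml, encoding="unicode"))
```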

  14. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

With object oriented design/analysis, a general purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed to model corrosion data comprehensively and in a self-contained form. The island, of tree-like structure, contains six first-level child nodes to characterize every important aspect of the corrosion data. Each first-level node holds more child nodes recursively as data containers. The design of the data structure inside the island is intended to decrease the learning curve and break the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with examples chosen carefully in order to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. At the end, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.

  15. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    TANG ZiLong

    2008-01-01

With object oriented design/analysis, a general purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed to model corrosion data comprehensively and in a self-contained form. The island, of tree-like structure, contains six first-level child nodes to characterize every important aspect of the corrosion data. Each first-level node holds more child nodes recursively as data containers. The design of the data structure inside the island is intended to decrease the learning curve and break the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with examples chosen carefully in order to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. At the end, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.

  16. Field Data and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Ralf Löwner

    2007-06-01

Full Text Available Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. The comparison between different data sets and different disciplines creates new data, adds value, and finally accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML) to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. GHML initially offers easily deducible content because of the text-based encoding of information, which does not use binary data. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly the standards defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements (in particular the Geography Markup Language (ISO 19136, GML) and the whole ISO 19000 series). However, the requirements demanded a quick solution and an XML application schema readable for any scientist without a background in information technology. Therefore, ideas, concepts and definitions have been used to build up the modules of GHML without importing any of these markup languages. This enables a comprehensive schema and simple use.

  17. The Accelerator Markup Language and the Universal Accelerator Parser

    Energy Technology Data Exchange (ETDEWEB)

    Sagan, D.; Forster, M.; /Cornell U., LNS; Bates, D.A.; /LBL, Berkeley; Wolski, A.; /Liverpool U. /Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; /CERN; Walker, N.J.; /DESY; Larrieu, T.; Roblin, Y.; /Jefferson Lab; Pelaia, T.; /Oak Ridge; Tenenbaum, P.; Woodley, M.; /SLAC; Reiche, S.; /UCLA

    2006-10-06

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.

  18. The Accelerator Markup Language and the Universal Accelerator Parser

    Energy Technology Data Exchange (ETDEWEB)

    Sagan, D.; Forster, M.; /Cornell U., LNS; Bates, D.A.; /LBL, Berkeley; Wolski, A.; /Liverpool U. /Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; /CERN; Walker, N.J.; /DESY; Larrieu, T.; Roblin, Y.; /Jefferson Lab; Pelaia, T.; /Oak Ridge; Tenenbaum, P.; Woodley, M.; /SLAC; Reiche, S.; /UCLA

    2006-10-06

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.

  19. Experimental Applications of Automatic Test Markup Language (ATML)

    Science.gov (United States)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  20. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  1. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  2. Descriptive markup languages and the development of digital humanities

    Directory of Open Access Journals (Sweden)

    Boris Bosančić

    2012-11-01

Full Text Available The paper discusses the role of descriptive markup languages in the development of digital humanities, a new research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities has been inseparable from the development of markup languages since the mid-1980s and the appearance of SGML, the markup language that was the foundation of TEI, a key standard for the encoding and exchange of humanities texts in the digital environment. Special attention is dedicated to the development of the Text Encoding Initiative (TEI), the key organization that developed the eponymous standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML

  3. STMML. A markup language for scientific, technical and medical publishing

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    Full Text Available STMML is an XML-based markup language covering many generic aspects of scientific information. It has been developed as a re-usable core for more specific markup languages. It supports data structures, data types, metadata, scientific units and some basic components of scientific narrative. The central means of adding semantic information is through dictionaries. The specification is through an XML Schema which can be used to validate STMML documents or fragments. Many examples of the language are given.

  4. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    Science.gov (United States)

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued), and some classes of Petri net models can also be encoded with this approach.
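As a shape-of-the-data illustration, a qualitative species and a transition in the qual package look roughly like the fragment printed below. The element names follow the qual package conventions, but the fragment is simplified (no namespace prefixes, no function terms or math) and is not claimed to be schema-valid.

```python
# Simplified sketch of qual-package constructs for one Boolean species and one
# transition. Real documents attach these to an SBML Level 3 model and use the
# qual namespace; prefixes and several required attributes are omitted here.
import xml.etree.ElementTree as ET

qual_species = ET.Element("listOfQualitativeSpecies")
ET.SubElement(qual_species, "qualitativeSpecies",
              {"id": "geneA", "compartment": "cell",
               "constant": "false", "maxLevel": "1"})

transitions = ET.Element("listOfTransitions")
tr = ET.SubElement(transitions, "transition", {"id": "tr_geneA"})
inputs = ET.SubElement(tr, "listOfInputs")
ET.SubElement(inputs, "input",
              {"qualitativeSpecies": "repressorB", "sign": "negative",
               "transitionEffect": "none"})
outputs = ET.SubElement(tr, "listOfOutputs")
ET.SubElement(outputs, "output",
              {"qualitativeSpecies": "geneA", "transitionEffect": "assignmentLevel"})

for fragment in (qual_species, transitions):
    print(ET.tostring(fragment, encoding="unicode"))
```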

  5. PIML: the Pathogen Information Markup Language.

    Science.gov (United States)

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being taken to include other groups' supporting PIML and to develop more PIML documents. All the PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  6. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines, proposed by WHO/FAO and EFSA, include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivities of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards that can be used to encode data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/) database. General implementation of AllerML will promote automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.

  8. The Next Step towards a Function Markup Language

    NARCIS (Netherlands)

    Heylen, Dirk K.J.; Kopp, S.; Marsella, S.C.; Pelachaud, C.; Vilhjálmson, H.; Prendinger, H.; Lester, J.; Ishizuka, M.

    2008-01-01

    In order to enable collaboration and exchange of modules for generating multimodal communicative behaviours of robots and virtual agents, the SAIBA initiative envisions the definition of two representation languages. One of these is the Function Markup Language (FML). This language specifies the

  9. The WANDAML Markup Language for Digital Document Annotation

    NARCIS (Netherlands)

    Franke, K.; Guyon, I.; Schomaker, L.; Vuurpijl, L.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  11. Improving Interoperability by Incorporating UnitsML Into Markup Languages.

    Science.gov (United States)

    Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W

    2010-01-01

    Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML-a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.

  12. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    Science.gov (United States)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited in only supporting scalar time-independent data. Additional functionality is required to support vector and matrix data, abstracting sub-system models, detailing dynamics system models (both discrete and continuous), and defining a dynamic data format (such as time sequenced data) for validation of dynamics system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file; thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  13. A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.

    Science.gov (United States)

    Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J

    2016-06-17

    Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.
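
    The conversion described above hinges on reading Systems Biology Ontology (SBO) annotations attached to SBML elements. The Python sketch below is not the converter from the paper; it merely scans an SBML file for sboTerm attributes with the standard library and maps a few terms to coarse, hypothetical role labels to show where such an inference would start.

      # Sketch: collect sboTerm annotations from an SBML file and map them to
      # coarse, illustrative role labels. The mapping table is hypothetical; a real
      # SBML-to-SBOL converter would follow the SBO and SBOL specifications.
      import sys
      import xml.etree.ElementTree as ET

      ILLUSTRATIVE_ROLES = {            # hypothetical SBO-term -> label mapping
          "SBO:0000252": "protein-like species",
          "SBO:0000176": "biochemical-reaction-like process",
      }

      def collect_sbo_terms(sbml_path):
          """Return a list of (tag, id, sboTerm, label) tuples found in the file."""
          tree = ET.parse(sbml_path)
          found = []
          for elem in tree.iter():
              sbo = elem.get("sboTerm")
              if sbo is None:
                  continue
              tag = elem.tag.split("}")[-1]             # strip namespace
              label = ILLUSTRATIVE_ROLES.get(sbo, "unmapped")
              found.append((tag, elem.get("id"), sbo, label))
          return found

      if __name__ == "__main__":
          for row in collect_sbo_terms(sys.argv[1]):
              print(row)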

  14. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human readable manner, has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control applies to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.

  15. Systematic reconstruction of TRANSPATH data into cell system markup language.

    Science.gov (United States)

    Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru

    2008-06-23

    Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulation of transcription factors and miRNAs. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is annotated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to convert TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions.
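
    HFPNe is far richer than an ordinary Petri net, but the basic place/transition dynamics it builds on can be illustrated with the toy discrete Petri net sketched below in Python; this is only a didactic stand-in, not the 16 modeling rules or the CSML encoding described in the record.

      # Sketch: a toy discrete Petri net, far simpler than the hybrid functional
      # Petri net with extension (HFPNe) used for CSML models, but showing the
      # basic place/transition dynamics that such encodings build on.
      class PetriNet:
          def __init__(self, marking):
              self.marking = dict(marking)          # place -> token count
              self.transitions = []                 # list of (inputs, outputs)

          def add_transition(self, inputs, outputs):
              self.transitions.append((inputs, outputs))

          def enabled(self, t):
              inputs, _ = self.transitions[t]
              return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

          def fire(self, t):
              if not self.enabled(t):
                  raise ValueError(f"transition {t} is not enabled")
              inputs, outputs = self.transitions[t]
              for p, n in inputs.items():
                  self.marking[p] -= n
              for p, n in outputs.items():
                  self.marking[p] = self.marking.get(p, 0) + n

      # A single "reaction": A + B -> C, encoded as one transition.
      net = PetriNet({"A": 2, "B": 1, "C": 0})
      net.add_transition({"A": 1, "B": 1}, {"C": 1})
      net.fire(0)
      print(net.marking)   # {'A': 1, 'B': 0, 'C': 1}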

  16. A Mathematical Model Approach to Determine the Markup Percentage of Product Selling Prices

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available The purpose of this research is to design a mathematical model that determines sales volume as an alternative way to set the markup percentage on a product's selling price. The model was built with multiple regression statistics: sales volume is expressed as a function of the markup, market-condition, and substitute-condition variables. The fitted model satisfies the tests for error assumptions, model accuracy, model validation, and multicollinearity. It has been embedded in an application program that is expected to give the user (1) an alternative basis for deciding which markup percentage to set, (2) an estimate of the gross profit to be earned at each candidate markup, (3) an estimate of the percentage of units that will be sold at each candidate markup, and (4) an estimate of total profit before tax for the period concerned. Keywords: mathematical model, application program, sales volume, markup, gross profit.
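
    As a rough illustration only (not the model estimated in the article), the Python sketch below fits a multiple regression of sales volume on markup, a market-condition index, and a substitute-condition index using synthetic data, then tabulates predicted volume and an approximate gross profit for a few candidate markup percentages.

      # Sketch: multiple regression of sales volume on markup percentage, market
      # condition, and substitute condition, using synthetic data. Coefficients and
      # data are illustrative only; the article's actual model was fit to real data.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      markup = rng.uniform(0.05, 0.60, n)          # markup percentage
      market = rng.normal(0.0, 1.0, n)             # market-condition index
      substitute = rng.normal(0.0, 1.0, n)         # substitute-condition index
      volume = 1000 - 800 * markup + 50 * market - 30 * substitute + rng.normal(0, 20, n)

      X = np.column_stack([np.ones(n), markup, market, substitute])
      beta, *_ = np.linalg.lstsq(X, volume, rcond=None)

      def predicted_volume(m, mkt=0.0, sub=0.0):
          return beta @ np.array([1.0, m, mkt, sub])

      unit_cost = 10.0
      for m in (0.10, 0.20, 0.30):
          vol = predicted_volume(m)
          gross_profit = vol * unit_cost * m       # markup applied to unit cost
          print(f"markup={m:.0%}  predicted volume={vol:7.1f}  gross profit~{gross_profit:9.1f}")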

  17. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
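
    For orientation, the Python sketch below parses a minimal, hand-written SBML-like document with the standard library; the namespace and element names follow SBML Level 3 Version 1 Core as recalled here, but the fragment is abbreviated and not claimed to be schema-valid.

      # Sketch: parse a minimal, hand-written SBML-like document. The namespace and
      # element names follow SBML Level 3 Version 1 Core as remembered here; the
      # fragment is abbreviated and not guaranteed to be schema-valid.
      import xml.etree.ElementTree as ET

      DOC = """<?xml version="1.0" encoding="UTF-8"?>
      <sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
        <model id="simple_decay">
          <listOfCompartments>
            <compartment id="cell" constant="true"/>
          </listOfCompartments>
          <listOfSpecies>
            <species id="X" compartment="cell" initialAmount="10"
                     hasOnlySubstanceUnits="false" boundaryCondition="false" constant="false"/>
          </listOfSpecies>
          <listOfReactions>
            <reaction id="decay" reversible="false">
              <listOfReactants>
                <speciesReference species="X" stoichiometry="1" constant="true"/>
              </listOfReactants>
            </reaction>
          </listOfReactions>
        </model>
      </sbml>"""

      NS = {"sbml": "http://www.sbml.org/sbml/level3/version1/core"}
      root = ET.fromstring(DOC)
      for sp in root.findall(".//sbml:species", NS):
          print("species:", sp.get("id"), "in", sp.get("compartment"))
      for rx in root.findall(".//sbml:reaction", NS):
          print("reaction:", rx.get("id"))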

  18. Advances in aircraft design: Multiobjective optimization and a markup language

    Science.gov (United States)

    Deshpande, Shubhangi

    Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in a low fidelity design of aircraft. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state-of-the-art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative free direct search techniques, and is intended for solving blackbox simulation based multiobjective optimization problems with possibly nonsmooth functions where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demands a formal structured representation of data. XML, a W3C recommendation, is one such standard concomitant with a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data
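
    The thesis's hybrid direct-search method is not reproduced here; the Python sketch below only illustrates the generic idea behind weighted scalarization, converting a toy biobjective problem into a family of single-objective problems by sweeping a weight.

      # Sketch: weighted-sum scalarization of a toy biobjective problem, solved on a
      # grid of weights. This is a generic illustration only, not the adaptive
      # weighting scheme or hybrid direct-search method proposed in the thesis.
      import numpy as np

      def f1(x):
          return x ** 2                 # first objective

      def f2(x):
          return (x - 2.0) ** 2         # second objective

      xs = np.linspace(-1.0, 3.0, 2001)         # crude 1-D search grid
      pareto_points = []
      for w in np.linspace(0.0, 1.0, 11):       # sweep weights
          scalar = w * f1(xs) + (1.0 - w) * f2(xs)
          x_best = xs[np.argmin(scalar)]
          pareto_points.append((w, x_best, f1(x_best), f2(x_best)))

      for w, x, a, b in pareto_points:
          print(f"w={w:.1f}  x*={x:+.3f}  f1={a:.3f}  f2={b:.3f}")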

  19. Development of the Plate Tectonics and Seismology markup languages with XML

    Science.gov (United States)

    Babaie, H.; Babaei, A.

    2003-04-01

    The Extensible Markup Language (XML) and its specifications such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as Seismology Markup Language (SeismML) or Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate" into a UML object model, applying a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD), that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving to a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform'). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and

  20. Development of Markup Language for Medical Record Charting: A Charting Language.

    Science.gov (United States)

    Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung

    2015-01-01

    Many initiatives for collecting electronic medical records (EMRs) now exist. However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new markup language for medical record charting (called Charting Language), which borrows useful properties from programming languages. With Charting Language, text data recorded in dynamic clinical situations can easily be used for information extraction.

  1. Study on the Automatic Test Markup Language (ATML) Standard and Its Application

    Institute of Scientific and Technical Information of China (English)

    康占祥; 戴嫣青; 杨占才

    2012-01-01

    The background, purpose, and model structure of the Automatic Test Markup Language (ATML) standard are first analysed. The definition methods and representation of all of the standard's constituent models are then described, including the common elements (Common), the test results and session information markup language (TRML), the diagnostics model markup language (DML), the test description markup language (TDML), the instrument description markup language (IDML), the test configuration description markup language (TCML), the UUT description markup language (UDML), the test station markup language (TSML), and the test adapter markup language (TAML). Finally, a method for applying the ATML standards in automatic test systems is presented, laying a technical foundation for sharing test resources across the different maintenance levels of weapon systems.

  2. Pathology data integration with eXtensible Markup Language.

    Science.gov (United States)

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.

  3. Earth Science Markup Language: Transitioning From Design to Application

    Science.gov (United States)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  4. Visualizing Scientific Data Using Keyhole Markup Language (KML)

    Science.gov (United States)

    Valcic, L.; Bailey, J. E.; Dehn, J.

    2006-12-01

    Over the last five years there has been a proliferation in the development of virtual globe programs. Programs such as Google Earth, NASA World Wind, SkylineGlobe, Geofusion and ArcGIS Explorer each have their own strengths and weaknesses, and whether a market will remain for all tools will be determined by user application. This market is currently led by Google Earth, the release of which on 28 Jun 2005 helped spark a revolution in virtual globe technology, by bringing it into the public view and imagination. Many would argue that such a revolution was due, but it was certainly aided by the world-wide name recognition of Google, and the creation of a user-friendly interface. Google Earth is an updated version of a program originally called Earth Viewer, which was developed by Keyhole Inc. It was renamed after Google purchased Keyhole and their technology in 2001. In order to manage the geospatial data within these viewers, the developers created a new XML-based (Extensible Markup Language) called Keyhole Markup Language (KML). Through manipulation of KML scientists are finding increasingly creative and more visually appealing methods to display and manipulate their data. A measure of the success of Google Earth and KML is demonstrated by the fact that other virtual globes are now including various levels of KML compatibility. This presentation will display examples of how KML has been applied to scientific data. It will offer a forum for questions pertaining to how KML can be applied to a user's dataset. Interested parties are encouraged to bring examples of projects under development or being planned.
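
    A minimal KML placemark can be produced with nothing more than the Python standard library, as sketched below; KML coordinates are longitude,latitude[,altitude], and the point used here is arbitrary example data.

      # Sketch: write a one-placemark KML document for display in a virtual globe.
      # Coordinates in KML are longitude,latitude[,altitude]; the point used here is
      # arbitrary example data, not a real measurement.
      import xml.etree.ElementTree as ET

      KML_NS = "http://www.opengis.net/kml/2.2"
      ET.register_namespace("", KML_NS)

      kml = ET.Element(f"{{{KML_NS}}}kml")
      doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
      pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
      ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Example sample site"
      ET.SubElement(pm, f"{{{KML_NS}}}description").text = "Illustrative observation point"
      point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
      ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-153.43,59.36,0"

      print(ET.tostring(kml, encoding="unicode"))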

  5. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230

  6. Standard protocol for exchange of health-checkup data based on SGML: the Health-checkup Data Markup Language (HDML).

    Science.gov (United States)

    Sugimori, H; Yoshida, K; Hara, S; Furumi, K; Tofukuji, I; Kubodera, T; Yoda, T; Kawai, M

    2002-01-01

    To develop a health/medical data interchange model for efficient electronic exchange of data among health-checkup facilities. A Health-checkup Data Markup Language (HDML) was developed on the basis of the Standard Generalized Markup Language (SGML), and a feasibility study carried out, involving data exchange between two health checkup facilities. The structure of HDML is described. The transfer of numerical lab data, summary findings and health status assessment was successful. HDML is an improvement to laboratory data exchange. Further work has to address the exchange of qualitative and textual data.

  7. Geometry Description Markup Language for Physics Simulation And Analysis Applications.

    Energy Technology Data Exchange (ETDEWEB)

    Chytracek, R.; /CERN; McCormick, J.; /SLAC; Pokorski, W.; /CERN; Santin, G.; /European Space Agency

    2007-01-23

    The Geometry Description Markup Language (GDML) is a specialized XML-based language designed as an application-independent persistent format for describing the geometries of detectors associated with physics measurements. It serves to implement "geometry trees" which correspond to the hierarchy of volumes a detector geometry can be composed of, to identify the position of individual solids, and to describe the materials they are made of. Being pure XML, GDML can be universally used, and in particular it can be considered as the format for interchanging geometries among different applications. In this paper we will present the current status of the development of GDML. After having discussed the contents of the latest GDML schema, which is the basic definition of the format, we will concentrate on the GDML processors. We will present the latest implementation of the GDML "writers" as well as "readers" for either Geant4 [2], [3] or ROOT [4], [10].
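
    The kind of geometry tree GDML encodes can be sketched as below; the element names are recalled from GDML examples (solids referenced by logical volumes, gathered under a world setup) and should be checked against the current schema before use.

      # Sketch: a skeletal GDML-style geometry tree (define / materials / solids /
      # structure / setup). Element names are recalled from GDML examples and are
      # not guaranteed to match the current schema exactly.
      import xml.etree.ElementTree as ET

      GDML_DOC = """<?xml version="1.0"?>
      <gdml>
        <define/>
        <materials/>
        <solids>
          <box name="worldBox" lunit="mm" x="1000" y="1000" z="1000"/>
        </solids>
        <structure>
          <volume name="World">
            <materialref ref="G4_AIR"/>
            <solidref ref="worldBox"/>
          </volume>
        </structure>
        <setup name="Default" version="1.0">
          <world ref="World"/>
        </setup>
      </gdml>"""

      root = ET.fromstring(GDML_DOC)
      print("solids:", [s.get("name") for s in root.find("solids")])
      print("world volume:", root.find("setup/world").get("ref"))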

  8. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language

    National Research Council Canada - National Science Library

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-01-01

    .... In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments...

  9. Question Answering System Based on Artificial Intelligence Markup Language as an Information Medium

    Directory of Open Access Journals (Sweden)

    Fajrin Azwary

    2016-04-01

    Full Text Available Artificial intelligence can today be deployed in many forms, such as chatbots, and implemented with various methods, one of which is the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing user input against specific patterns stored in its knowledge base. The AIML template design process begins by determining the information to be provided, which is then formulated as questions and adapted to AIML patterns. The study shows that a question-answering system implemented as an AIML chatbot is able to communicate with users and deliver information. Keywords: artificial intelligence, template matching, Artificial Intelligence Markup Language, AIML.
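
    A toy version of the template-matching idea is sketched below in Python: AIML-style categories (pattern/template pairs, here with a bare '*' fallback) are parsed from a string and matched against normalized user input. Real AIML interpreters implement a much richer matching order and tag set.

      # Sketch: minimal AIML-style template matching. Categories pair a PATTERN with
      # a TEMPLATE; input is uppercased and matched exactly or against a trailing
      # "*" wildcard. Real AIML defines a full matching priority and many more tags.
      import xml.etree.ElementTree as ET

      AIML_DOC = """<aiml>
        <category><pattern>HELLO</pattern><template>Hi! How can I help you?</template></category>
        <category><pattern>WHAT IS AIML</pattern>
          <template>AIML is an XML dialect for writing chatbot knowledge bases.</template></category>
        <category><pattern>*</pattern><template>Sorry, I do not know that yet.</template></category>
      </aiml>"""

      def load_categories(doc):
          root = ET.fromstring(doc)
          return [(c.findtext("pattern").strip().upper(), c.findtext("template").strip())
                  for c in root.findall("category")]

      def respond(categories, user_input):
          text = " ".join(user_input.upper().split())
          for pattern, template in categories:       # exact matches first
              if pattern == text:
                  return template
          for pattern, template in categories:       # then wildcard fallbacks
              if pattern == "*" or (pattern.endswith("*") and text.startswith(pattern[:-1].rstrip())):
                  return template
          return "..."

      cats = load_categories(AIML_DOC)
      print(respond(cats, "hello"))
      print(respond(cats, "what is aiml"))
      print(respond(cats, "tell me a joke"))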

  10. The basics of CrossRef extensible markup language

    Directory of Open Access Journals (Sweden)

    Rachael Lammey

    2014-08-01

    Full Text Available CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef’s citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join in order to avail themselves of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article will give an introduction to CrossRef XML and what publishers need to do in order to start to deposit DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.

  11. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  12. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
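
    The Python sketch below parses a skeletal SED-ML-style description (one model, one uniform time course, one task tying them together); the namespace URI, element names, and attributes are written from memory of the Level 1 Version 2 specification and may need adjustment against the official schema.

      # Sketch: a skeletal SED-ML-style experiment description (which model to load,
      # one uniform time-course simulation, and a task tying them together). Names
      # are recalled from the specification and are not verified against the schema.
      import xml.etree.ElementTree as ET

      SEDML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
      <sedML xmlns="http://sed-ml.org/sed-ml/level1/version2" level="1" version="2">
        <listOfModels>
          <model id="model1" language="urn:sedml:language:sbml" source="decay_model.xml"/>
        </listOfModels>
        <listOfSimulations>
          <uniformTimeCourse id="sim1" initialTime="0" outputStartTime="0"
                             outputEndTime="100" numberOfPoints="200">
            <algorithm kisaoID="KISAO:0000019"/>
          </uniformTimeCourse>
        </listOfSimulations>
        <listOfTasks>
          <task id="task1" modelReference="model1" simulationReference="sim1"/>
        </listOfTasks>
      </sedML>"""

      NS = {"sed": "http://sed-ml.org/sed-ml/level1/version2"}
      root = ET.fromstring(SEDML_DOC)
      for task in root.findall(".//sed:task", NS):
          print("task", task.get("id"), "runs", task.get("simulationReference"),
                "on", task.get("modelReference"))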

  13. SuML: A Survey Markup Language for Generalized Survey Encoding

    Science.gov (United States)

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  14. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    Science.gov (United States)

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  15. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
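
    As a much smaller illustration than the analyses SBML-SAT implements, the Python sketch below samples parameters of a toy model and ranks them by Spearman rank correlation with the output; PRCC additionally partials out the other parameters and Sobol's method decomposes variance, neither of which is attempted here.

      # Sketch: crude global sensitivity screen for a toy model. Parameters are
      # sampled, an output is computed, and parameters are ranked by Spearman rank
      # correlation with the output. This is only a stand-in for the PRCC, Sobol and
      # multi-parametric analyses that SBML-SAT itself provides.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(42)
      n_samples = 2000
      params = {
          "k1": rng.uniform(0.1, 1.0, n_samples),
          "k2": rng.uniform(0.1, 1.0, n_samples),
          "k3": rng.uniform(0.1, 1.0, n_samples),
      }

      # Toy steady-state output: strongly driven by k1, weakly by k2, not by k3.
      output = (params["k1"] ** 2 + 0.1 * params["k2"] + 0.0 * params["k3"]
                + rng.normal(0, 0.01, n_samples))

      for name, values in params.items():
          rho, _ = spearmanr(values, output)
          print(f"{name}: Spearman rho = {rho:+.3f}")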

  16. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  17. The Multiscale Modeling Markup Language (M3L) for Collaborative Physiome Research

    Institute of Scientific and Technical Information of China (English)

    韩道; 明星; 李胜辉; 刘谦

    2011-01-01

    To meet the requirements of multidisciplinary, multiscale collaborative research on the physiome, and to reflect the characteristics of physiome research in China, the Multiscale Modeling Markup Language (M3L) was designed to standardize how information is described during the research process, enabling open sharing and reuse of mathematical models and computational data. M3L provides mechanisms for model structuring, data archiving, information exchange and reuse, description of mathematical algorithms, and description of visualization models and attachments. Its special handling of anatomical structure data for humans and small animals links structural and functional information and promotes the standardized description of models and data in Chinese physiome research. Because M3L describes models in a multiscale, structured way, it matches the characteristics of physiome models and supports collaborative development.

  18. Developing a Markup Language for Encoding Graphic Content in Plan Documents

    Science.gov (United States)

    Li, Jinghuan

    2009-01-01

    While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…

  19. ICAAP eXtended Markup Language: Exploiting XML and Adding Value to the Journals Production Process.

    Science.gov (United States)

    Sosteric, Mike

    1999-01-01

    Discusses the technological advances attained by the International Consortium for Alternative Academic Publication (ACAAP) aimed at reforming the scholarly communication system and providing an alternative to high-priced commercial publishing. Describes the eXtended markup language, based on SGML and HTML, that provides indexing and…

  20. A primer on the Petri Net Markup Language and ISO/IEC 15909-2

    DEFF Research Database (Denmark)

    Hillah, L. M.; Kindler, Ekkart; Kordon, F.

    2009-01-01

    Standard, defines a transfer format for high-level nets. The transfer format defined in Part 2 of ISO/IEC 15909 is (or is based on) the \\emph{Petri Net Markup Language} (PNML), which was originally introduced as an interchange format for different kinds of Petri nets. In ISO/IEC 15909-2, however...

  2. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    Science.gov (United States)

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.

  3. A Model for Contract Bid Markup Strategies

    Institute of Scientific and Technical Information of China (English)

    LI Yong-ping; CHEN Rong-qiu; ZHOU Shao-fu

    2001-01-01

    This paper presents a bid model which can be readily implemented in a competitive bid environment within the construction industry. Competitive bid situations involve a multiplicity of criteria. In this paper, 31 criteria, which affect optimum estimation, are considered. The model is based on case-based reasoning (CBR), a database of past completed bids, and information associated with successful bids. CBR is considered the best method because of the complexity of the construction domain and the wealth of information that contractors possess on past bids.

  4. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    Science.gov (United States)

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  5. ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)

    Science.gov (United States)

    Sailors, R. Matthew

    2001-01-01

    It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. ArdenML (the Arden Syntax Markup Language) is an XML-based markup language that allows structured access to most of the maintenance and library categories without the need to write or buy a compiler; its development may lead to simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).

  6. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research

  7. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  8. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Directory of Open Access Journals (Sweden)

    Waltemath Dagmar

    2011-12-01

    Full Text Available Abstract Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML. SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s used

  9. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE`s Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.
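    To make the idea of an encoded bibliographic record concrete, the following is a minimal, hypothetical SGML-style fragment in the spirit of the guide described above; the element names (BIBREC, TITLE, AUTHOR, DATE, ABSTRACT, AVAIL) are invented for illustration and are not taken from the OSTI document type definition.

        <!-- Hypothetical encoded bibliographic record; tag names are illustrative, not the OSTI DTD -->
        <BIBREC>
          <TITLE>Example technical report title</TITLE>
          <AUTHOR AFFIL="Example National Laboratory">Doe, J.</AUTHOR>
          <DATE>1994-09</DATE>
          <ABSTRACT>Short abstract text for the encoded record.</ABSTRACT>
          <AVAIL>OSTI</AVAIL>
        </BIBREC>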

  10. Hypertext markup language as an authoring tool for CD-ROM production.

    Science.gov (United States)

    Lynch, P J; Horton, S

    1998-01-01

    The Hypertext Markup Language (HTML) used to create Web pages is an attractive alternative to the proprietary authoring software that is now widely used to produce multimedia content for CD-ROMs. This paper describes the advantages and limitations of HTML as a non-proprietary and cross-platform CD-ROM authoring system, and the more general advantages of HTML as data standard for biocommunications content.

  11. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html kikuchi@hydra.mki.co.jp.
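    As a rough illustration of the kind of structure such a language must capture, the hypothetical XML fragment below encodes a short branched glycan as nested residues with linkage attributes; the element and attribute names are invented and do not reproduce the published CabosML schema.

        <!-- Illustrative only; not the actual CabosML schema -->
        <carbohydrate id="example-glycan">
          <residue name="GlcNAc" anomer="b">
            <linkage from="1" to="4"/>
            <residue name="GlcNAc" anomer="b">
              <linkage from="1" to="4"/>
              <residue name="Man" anomer="b"/>
            </residue>
          </residue>
        </carbohydrate>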

  12. Evolution of Web-Based Applications Using Domain-Specific Markup Languages

    Directory of Open Access Journals (Sweden)

    Guntram Graef

    2000-11-01

    Full Text Available The lifecycle of Web-based applications is characterized by frequent changes to content, user interface, and functionality. Updating content and improving the services provided to users drive further development of a Web-based application. The major goal for the success of a Web-based application therefore becomes its evolution. However, the development and maintenance of Web-based applications suffer from the underlying document-based implementation model. A disciplined evolution of Web-based applications requires the application of software engineering practice for systematic further development and reuse of software artifacts. In this contribution we suggest adopting the component paradigm for the development and evolution of Web-based applications. The approach is based on a dedicated component technology and component-software architecture. It allows abstracting from many technical aspects of the Web as an application platform by introducing domain-specific markup languages. These languages allow the description of services, which represent domain components in our Web-component-software approach. Domain experts with limited knowledge of technical details can therefore describe application functionality, and the evolution of orthogonal aspects of the application can be decoupled. The whole approach is based on XML to achieve the standardization and economic efficiency necessary for use in real-world projects.

  13. A two-way interface between limited Systems Biology Markup Language and R.

    Science.gov (United States)

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
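    For orientation, the sketch below shows a minimal SBML Level 2 skeleton of the general kind that write.SBML() would emit and read.SBML() would parse; the model, species and reaction identifiers are placeholders, and this is not the published purine metabolism model.

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Minimal SBML Level 2 skeleton; identifiers are placeholders -->
        <sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
          <model id="example_model">
            <listOfCompartments>
              <compartment id="cell" size="1"/>
            </listOfCompartments>
            <listOfSpecies>
              <species id="A" compartment="cell" initialConcentration="1"/>
              <species id="B" compartment="cell" initialConcentration="0"/>
            </listOfSpecies>
            <listOfReactions>
              <reaction id="A_to_B" reversible="false">
                <listOfReactants><speciesReference species="A"/></listOfReactants>
                <listOfProducts><speciesReference species="B"/></listOfProducts>
              </reaction>
            </listOfReactions>
          </model>
        </sbml>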

  14. A two-way interface between limited Systems Biology Markup Language and R

    Directory of Open Access Journals (Sweden)

    Radivoyevitch Tomas

    2004-12-01

    Full Text Available Abstract Background Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML(), which maps this R model structure to SBML level 2, and read.SBML(), which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.

  15. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    Science.gov (United States)

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  16. Root system markup language: toward a unified root architecture description language.

    Science.gov (United States)

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows.
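    An abbreviated sketch of an RSML-style document is given below, with a scene containing one plant and one root described by a polyline geometry; it is simplified for illustration and should not be read as the normative RSML schema.

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Simplified sketch in the style of RSML; not the normative schema -->
        <rsml>
          <metadata>
            <version>1</version>
            <unit>cm</unit>
            <software>example-tracer</software>
          </metadata>
          <scene>
            <plant id="plant1">
              <root id="root1" label="primary">
                <geometry>
                  <polyline>
                    <point x="0.0" y="0.0"/>
                    <point x="0.1" y="1.2"/>
                    <point x="0.3" y="2.5"/>
                  </polyline>
                </geometry>
              </root>
            </plant>
          </scene>
        </rsml>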

  17. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clustering (including biclustering) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can equally be used, without limitation, to represent clustering and validation results for other biomedical and physical data.
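    The hypothetical cluML-like fragment below suggests how multiple clusterings and a validity score could coexist in one document; the element names are invented for illustration and are not the published cluML format.

        <!-- Hypothetical fragment; element names are illustrative, not the published cluML format -->
        <clusterings dataset="example-microarray">
          <clustering algorithm="k-means" k="2">
            <cluster id="c1"><gene>GENE_A</gene><gene>GENE_B</gene></cluster>
            <cluster id="c2"><gene>GENE_C</gene></cluster>
            <validity index="silhouette" value="0.62"/>
          </clustering>
          <clustering algorithm="hierarchical" linkage="average">
            <cluster id="h1"><gene>GENE_A</gene></cluster>
            <cluster id="h2"><gene>GENE_B</gene><gene>GENE_C</gene></cluster>
          </clustering>
        </clusterings>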

  18. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    Full Text Available The article examines the progress of semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), which is another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). Semantic markup of the Tuvan lexicon will serve as the basis of a search-engine and reference system that will help users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.

  19. Modeling Technology for Automatic Test System Software Based on Automatic Test Markup Language Standard

    Institute of Scientific and Technical Information of China (English)

    杨占才; 王红; 范利花; 张桂英; 杨小辉

    2013-01-01

    The architecture of the international ATML standard and the methods used to describe all of the sub-models that make up the standard are discussed. Technical approaches for making the existing automatic test system (ATS) software platform compatible with the ATML standard are proposed, including the design of the modeling workflow, model identification, and the design of the model execution workflow. This lays the technical foundation for a generic, open ATS software platform and for sharing test resources across all maintenance levels of weapon systems.

  20. Normalized Relational Storage for Extensible Markup Language (XML) Schema

    Directory of Open Access Journals (Sweden)

    Kamsuriah Ahmad

    2011-01-01

    Full Text Available Problem statement: The use of XML as a common format for representing, exchanging, storing, integrating and accessing data poses many new challenges to database systems. Most application data are stored in relational databases due to their popularity and the rich development experience built around them. Therefore, how to provide a proper mapping approach from the XML model to the relational model has become a major research problem. Current techniques for managing XML in relational technology consider only the structure of an XML document and ignore its semantics as expressed by keys and functional dependencies. Approach: In this study we present an algorithm for generating an optimal design for XML in a relational setting. The algorithm is based on computing a set of minimum covers for all functional dependencies on a universal relation, given XML Functional Dependencies (XFDs) and the schema information. However, we need to deal with the hierarchical nature of XML and to define XFDs in this structure. Results: We show that our algorithm is efficient in terms of reducing data redundancy and preserving semantic expression. Conclusion/Recommendations: Being able to carry XML functional dependency constraints over to relational views of XML data is a first step towards establishing a connection between XML and its relational representation at the semantic level.

  1. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.

  2. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  3. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    Science.gov (United States)

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead, it depends on the organism, tissue and cell type as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple types of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  4. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  5. Extensible Markup Language: How Might It Alter the Software Documentation Process and the Role of the Technical Communicator?

    Science.gov (United States)

    Battalio, John T.

    2002-01-01

    Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…

  6. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  7. The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.
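    The flavour of a CompChem document can be conveyed by the heavily abridged CML sketch below: a computational job is expressed as a CML module whose parameters and properties point to dictionary entries via dictRef attributes. The dictionary references and values shown are illustrative assumptions, not an authoritative CompChem example.

        <!-- Abridged illustration of a CML CompChem-style job; dictRef values are illustrative -->
        <module xmlns="http://www.xml-cml.org/schema" dictRef="compchem:job">
          <parameterList dictRef="compchem:initialization">
            <parameter dictRef="cc:method"><scalar dataType="xsd:string">B3LYP</scalar></parameter>
            <parameter dictRef="cc:basis"><scalar dataType="xsd:string">6-31G*</scalar></parameter>
          </parameterList>
          <propertyList dictRef="compchem:finalization">
            <property dictRef="cc:totalEnergy">
              <scalar dataType="xsd:double" units="nonSi:hartree">-76.408</scalar>
            </property>
          </propertyList>
        </module>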

  8. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  9. The semantics of Chemical Markup Language (CML for computational chemistry : CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    Full Text Available Abstract This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  10. XML (eXtensible Mark-up Language) Industrial Standard, Determining Architecture of the Next Generation of the Internet Software

    CERN Document Server

    Galaktionov, V V

    2000-01-01

    The year 1999 saw the establishment of a new Internet technology - XML (eXtensible Mark-up Language), a markup language adopted by the WWW Consortium (http://www.w3.org) as a new industrial standard determining the architecture of the next generation of Internet software. This message presents the results of a study of this technology, the basic capabilities of XML, and rules and recommendations for its application.

  11. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    Full Text Available In general, the Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
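    A minimal JATS-style article skeleton of the kind discussed above is sketched below; only a few common tags are shown, the DOCTYPE declaration is omitted, and the content is a placeholder rather than an excerpt from the article.

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Minimal JATS-style skeleton; abridged for illustration -->
        <article article-type="research-article" xml:lang="en">
          <front>
            <journal-meta>
              <journal-title-group><journal-title>Example Journal</journal-title></journal-title-group>
            </journal-meta>
            <article-meta>
              <title-group><article-title>Coding practice example</article-title></title-group>
              <contrib-group>
                <contrib contrib-type="author">
                  <name><surname>Doe</surname><given-names>Jane</given-names></name>
                </contrib>
              </contrib-group>
            </article-meta>
          </front>
          <body>
            <sec><title>Introduction</title><p>Body text goes here.</p></sec>
          </body>
        </article>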

  12. Histoimmunogenetics Markup Language 1.0: Reporting next generation sequencing-based HLA and KIR genotyping.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2015-12-01

    We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. site-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

    The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism which provides programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD(Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack deployed at NSF-supported hydrologic observatory sites and other universities around the country, supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service, and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes

  14. Patient information exchange guideline MERIT-9 using medical markup language MML.

    Science.gov (United States)

    Kimura, M; Ohe, K; Yoshihara, H; Ando, Y; Kawamata, F; Hishiki, T; Ohashi, K; Sakusabe, T; Tani, S; Akiyama, M

    1998-01-01

    To realize clinical data exchange between healthcare providers, there must be many standards at many layers. Terms and codes should be standardized, the syntax used to wrap the data must be mutually parsable, and the transfer protocol or exchange media must be agreed upon. Among the many standards for the syntax, HL7 and DICOM are the most successful. However, not everything can be handled by HL7 alone. DICOM is good for radiology images, but other clinical images are already handled by "lighter" data formats like JPEG and TIFF. So it is not realistic to use only one standard for every area of clinical information. For the description of medical records, especially narrative information, we created an SGML DTD for medical information, called MML (Medical Markup Language). It is already implemented at more than 10 healthcare providers in Japan. As it is a hierarchical description of information, it is easily used as a basis for object request brokering. It is again not realistic to use MML alone for clinical information at various levels of detail. Therefore, we proposed a guideline for the use of available medical standards to facilitate clinical information exchange between healthcare providers. It is called MERIT-9 (MEdical Records, Images, Texts,--Information eXchange). A typical use is HL7 and DICOM files referred to from an MML file in a patient record as external entities. Both MML and MERIT-9 are research projects of the Japanese Ministry of Health and Welfare, and their purpose is to facilitate clinical data exchange. They are beginning to be used in technical specifications for new hospital information systems in Japan.

  15. LOG2MARKUP: State module to transform a Stata text log into a markup document

    DEFF Research Database (Denmark)

    2016-01-01

    log2markup extracts parts of the text version of the Stata log command and transforms the logfile into a markup-based document with the same name, but with extension markup (or otherwise specified in the option extension) instead of log. The author usually uses markdown for writing documents. However, other users may decide on all sorts of markup languages, e.g. HTML or LaTeX. The key is that markup of Stata code and Stata output can be set by the options.

  16. XML Based Virtual Instrument Markup Language--VIML

    Institute of Scientific and Technical Information of China (English)

    钟莹; 陈祥献

    2002-01-01

    This paper discusses in detail how to define, on the basis of XML, a domain-specific markup language for describing virtual instrument systems--VIML (Virtual Instrument Markup Language). By analysing the characteristics and operating modes of virtual instrument systems, element definitions are given for virtual instrument components, ports, and connection relationships on top of the metalanguage XML, standardizing the way virtual instrument systems are described.

  17. UOML: An Unstructured Operation Markup Language

    Institute of Scientific and Technical Information of China (English)

    王东临; 姜海峰; 张常有

    2007-01-01

    Written documents are one of the main forms of unstructured information. This paper proposes an interoperability standard for unstructured documents and presents its specification language, UOML (Unstructured Operation Markup Language), defining a detailed operation interface specification. The operation interfaces are expressed in XML, making them platform independent.

  18. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
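    The hypothetical fragment below suggests how a hierarchical classification such as ICD-10 might be represented in XML, with codes nested by chapter, block and category; the element names are invented for illustration and do not reflect the authors' actual DTD/schema or the CEN/TC 251 work.

        <!-- Hypothetical representation of a classification hierarchy; not the authors' schema -->
        <classification name="ICD-10" version="example">
          <chapter code="IX" title="Diseases of the circulatory system">
            <block code="I20-I25" title="Ischaemic heart diseases">
              <category code="I21" title="Acute myocardial infarction">
                <subcategory code="I21.0" title="Acute transmural myocardial infarction of anterior wall"/>
              </category>
            </block>
          </chapter>
        </classification>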

  19. The feature and analysis of GML--Geography Markup Language

    Institute of Scientific and Technical Information of China (English)

    梁明; 鲍艳; 黄朝华

    2002-01-01

    Addressing the difficulties of data exchange and sharing in the GIS industry, and building on a discussion of how the emergence of XML (Extensible Markup Language) as a new Web industry standard brings hope to GIS, this paper discusses and analyses the features of GML (Geography Markup Language) in detail and argues that GML is gradually becoming a widely accepted and easily understood exchange format for spatial information.

  20. A program code generator for multiphysics biological simulation using markup languages.

    Science.gov (United States)

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. The use of a model description file is helpful for the first point and partly for the second, but the third is difficult to handle because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.

  1. Supporting Online Video-Based Correction for Language Learning through Markup-Based Video Editing.

    Science.gov (United States)

    Hada, Yoshiaki; Ogata, Hiroaki; Yano, Yoneo

    This paper focuses on an online video based correction system for language learning. The prototype system using the proposed model supports learning between a native English teacher and a non-native learner using a videoconference system. It extends the videoconference system so that it can record the conversation of a learning scene. If a teacher…

  2. Using commercially available off-the-shelf software and hardware to develop an intranet-based hypertext markup language teaching file.

    Science.gov (United States)

    Wendt, G J

    1999-05-01

    This presentation describes the technical details of implementing a process to create digital teaching files stressing the use of commercial off-the-shelf (COTS) software and hardware and standard hypertext markup language (HTML) to keep development costs to a minimum.

  3. Endogenous Markups, Firm Productivity and International Trade:

    DEFF Research Database (Denmark)

    Bellone, Flora; Musso, Patrick; Nesta, Lionel

    In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous mark-ups. At the firm level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2) markups are positively related to firm productivity; 3) markups are negatively related to import penetration; 4) markups are positively related to firm export intensity and markups are higher on the export market than on the domestic ones in the presence of trade barriers and/or if competitors

  4. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).

  5. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    Science.gov (United States)

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.

  6. The latest MML (Medical Markup Language) version 2.3--XML-based standard for medical data exchange/storage.

    Science.gov (United States)

    Guo, Jinqiu; Araki, Kenji; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takada, Akira; Suzuki, Toshiaki; Nakashima, Yusei; Yoshihara, Hiroyuki

    2003-08-01

    As a set of standards, Medical Markup Language (MML) has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML version 2.21, characterized by the use of XML as its metalanguage, was announced in 1999, at which time full-scale implementation tests were carried out; subsequently, various information and functional inadequacies were discovered in this version. MML was therefore updated to version 2.3 in 2001. At present, MML contains 12 MML modules including the new referral, test result, and report modules. In version 2.3, the group ID element was added; the access right definition and health insurance module were amended.

  7. Markup in Engineering Design: A Discourse

    Directory of Open Access Journals (Sweden)

    Shaofeng Liu

    2010-03-01

    Full Text Available Today’s engineering companies are facing unprecedented competition in a global marketplace. There is now a knowledge-intensive shift towards whole product lifecycle support and collaborative environments. It has become particularly important to capture information, knowledge and experiences about previous design and subsequent stages of the product lifecycle, so as to retrieve and reuse such information in new and follow-on design activities. Recently, with the rapid development and adoption of digital technologies, annotation and markup are becoming important tools for information communication, retrieval and management. Such techniques are being increasingly applied to an array of applications and different digital items, such as text documents, 2D images and 3D models. This paper presents a state-of-the-art review of recent research in markup for engineering design, including a number of core markup languages and main markup strategies. Their applications and future utilization in engineering design, including multi-viewpoint product models, capture of information and rationale across the whole product lifecycle, integration of engineering design processes, and engineering document management, are comprehensively discussed.

  8. Precision markup modeling and display in a global geo-spatial environment

    Science.gov (United States)

    Wartell, Zachary J.; Ribarsky, William; Faust, Nickolas L.

    2003-08-01

    A data organization, scalable structure, and multiresolution visualization approach is described for precision markup modeling in a global geospatial environment. The global environment supports interactive visual navigation from global overviews to details on the ground at the resolution of inches or less. This is a difference in scale of 10 orders of magnitude or more. To efficiently handle details over this range of scales while providing accurate placement of objects, a set of nested coordinate systems is used, which always refers, through a series of transformations, to the fundamental world coordinate system (with its origin at the center of the earth). This coordinate structure supports multi-resolution models of imagery, terrain, vector data, buildings, moving objects, and other geospatial data. Thus objects that are static or moving on the terrain can be displayed without inaccurate positioning or jumping due to coordinate round-off. Examples of high resolution images, 3D objects, and terrain-following annotations are shown.

  9. Modeling of the positioning system and visual mark-up of historical cadastral maps

    Directory of Open Access Journals (Sweden)

    Tomislav Jakopec

    2013-03-01

    Full Text Available The aim of the paper is to present the possibilities of positioning and visual markup of historical cadastral maps onto Google maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of cadastral documentation that consists of cadastral material from the period of the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and which is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of the system which would use digital copies of original cadastral maps and connect them with systems like Google maps, and thus both protect the original materials and open up new avenues of research related to the use of originals. With this aim in mind, two cadastral map presentation models have been created. Firstly, there is a detailed display of the original, which enables its viewing using dynamic zooming. Secondly, the interactive display is facilitated through blending the cadastral maps with Google maps, which resulted in establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of the cumulative index of all terms found on cadastral maps. The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for

  10. Modelling language

    CERN Document Server

    Cardey, Sylviane

    2013-01-01

    In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int

  11. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    Science.gov (United States)

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.

  12. Managing and Querying Image Annotation and Markup in XML.

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
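    As a rough illustration of the kind of hierarchical annotation data being managed, the fragment below sketches an AIM-like image annotation; the element names, identifiers and codes are placeholders and do not reproduce the actual AIM information model.

        <!-- Hypothetical AIM-style annotation; element names and codes are placeholders -->
        <ImageAnnotation id="ann-001" dateTime="2010-01-15T10:30:00">
          <imageReference sopInstanceUID="1.2.840.0.0.0.example"/>
          <geometricShape type="circle">
            <center x="128.5" y="240.0"/>
            <radius value="12.3" unit="mm"/>
          </geometricShape>
          <imagingObservation code="EXAMPLE-CODE" codeSystem="example-lexicon" label="mass"/>
        </ImageAnnotation>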

  13. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    Science.gov (United States)

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
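    To give a feel for the package, the heavily abridged fragment below is written in the style of SBML Level 3 with the FBC package, showing flux bounds expressed as parameters referenced by a reaction and a maximization objective; the element and attribute spellings follow the FBC Version 2 specification as best recalled here and should be checked against the official document.

        <!-- Abridged FBC-style fragment; verify element and attribute names against the official specification -->
        <sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
              xmlns:fbc="http://www.sbml.org/sbml/level3/version1/fbc/version2"
              level="3" version="1" fbc:required="false">
          <model id="toy_fba" fbc:strict="true">
            <listOfParameters>
              <parameter id="lb_zero" value="0" constant="true"/>
              <parameter id="ub_default" value="1000" constant="true"/>
            </listOfParameters>
            <listOfReactions>
              <reaction id="R_biomass" reversible="false" fast="false"
                        fbc:lowerFluxBound="lb_zero" fbc:upperFluxBound="ub_default"/>
            </listOfReactions>
            <fbc:listOfObjectives fbc:activeObjective="obj1">
              <fbc:objective fbc:id="obj1" fbc:type="maximize">
                <fbc:listOfFluxObjectives>
                  <fbc:fluxObjective fbc:reaction="R_biomass" fbc:coefficient="1"/>
                </fbc:listOfFluxObjectives>
              </fbc:objective>
            </fbc:listOfObjectives>
          </model>
        </sbml>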

  14. THE DESIGN OF INSTALLATION AGRICULTURE TECHNOLOGY MARKUP LANGUAGE AND SCHEMA

    Institute of Scientific and Technical Information of China (English)

    陈桦; 杨悦欣; 孙波

    2005-01-01

    This paper discusses in detail how to define, on the basis of XML, a domain-specific markup language for describing installation (protected) agriculture technology--IATML (Installation Agriculture Technology Markup Language). The basic constituent elements of installation agriculture technology are analysed and defined on top of the metalanguage XML, an IATML Schema is established, and the description of installation agriculture technology is standardized.

  15. Describing Services on an Internet Marketplace Using Extensible Markup Language

    Institute of Scientific and Technical Information of China (English)

    周笑波; 杜鹏; 陈贵海; 陈道蓄; 谢立

    2000-01-01

    As a mechanism by which application systems obtain data and computational services from remote service nodes on the Internet and integrate them, the Internet marketplace has received wide attention. This paper presents a way of describing services on an Internet marketplace based on the Extensible Markup Language (XML). It strikes a good balance between the marketplace architecture and the viewpoints of customers and service providers.

  16. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities’ libraries

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Introduction: Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies such as metadata to facilitate description, place for, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study on the Central Libraries’ Websites of Iran State Universities for Hyper Text Mark-up Language (HTML) and Dublin Core metadata elements usage in 2011. Materials and Methods: The method of this study is applied-descriptive and data collection tool is the check lists created by the researchers. Statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and Ministry of Science, Research and Technology and method of sampling is the census. Information was collected through observation and direct visits to websites and data analysis was prepared by Microsoft Excel software, 2011. Results: The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlaps elements between HTML meta tags and Dublin Core (DC) elements. The percentage of overlaps of DC elements centralization in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, were 45% for the keywords and 39% for the description. But, HTML meta tags have moderate presence in both Ministries, as the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). Conclusion: It was observed that the Ministry of Health and Ministry of Science follows the same path for using Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers

  17. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities' libraries.

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Recent progress in providing innovative solutions in the organization of electronic resources and research in this area shows a global trend in the use of new strategies such as metadata to facilitate description, place for, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study on the Central Libraries' Websites of Iran State Universities for Hyper Text Mark-up Language (HTML) and Dublin Core metadata elements usage in 2011. The method of this study is applied-descriptive and data collection tool is the check lists created by the researchers. Statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and Ministry of Science, Research and Technology and method of sampling is the census. Information was collected through observation and direct visits to websites and data analysis was prepared by Microsoft Excel software, 2011. The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlaps elements between HTML meta tags and Dublin Core (DC) elements. The percentage of overlaps of DC elements centralization in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, were 45% for the keywords and 39% for the description. But, HTML meta tags have moderate presence in both Ministries, as the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). It was observed that the Ministry of Health and Ministry of Science follows the same path for using Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers to achieve faster and more accurate information resources

  18. Markups and Exporting Behavior

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic Michel Patrick

    2012-01-01

    estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge

  19. 简单介绍可扩展标记语言XML%A Brief Introduction to XML Extensible Markup Language

    Institute of Scientific and Technical Information of China (English)

    源艳芬; 梁慎青

    2010-01-01

    XML, short for eXtensible Markup Language, is a cross-platform, content-dependent technology for the Internet environment and currently a powerful tool for processing structured document information. Like HTML, it is derived from SGML (Standard Generalized Markup Language). It is much simpler than SGML and more congenial to programmers than HTML; it retains many SGML features and inherits SGML's strengths: a rigorous document structure, a clear hierarchy, more explicit semantics, good readability, and ease of writing and maintenance. It also makes the exchange of multimedia information between different systems a reality. Through examples, this paper gives a brief introduction to the use of XML and shows how simple it is to master and use.
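
    A minimal, self-describing XML document of the kind the article alludes to (the catalog example itself is mine, not taken from the paper), parsed with the Python standard library:

        # Sketch: element names describe the data they carry, and any application
        # with an XML parser can read them.
        import xml.etree.ElementTree as ET

        doc = """<catalog>
          <book id="b1">
            <title>Extensible Markup Language Basics</title>
            <year>2010</year>
          </book>
        </catalog>"""

        root = ET.fromstring(doc)
        for book in root.findall("book"):
            print(book.get("id"), book.findtext("title"), book.findtext("year"))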

  20. Analysis on the Standard of TTS Oriented Markup Language%面向文语转换的标记语言标准的研究

    Institute of Scientific and Technical Information of China (English)

    岳东剑; 柴佩琪; 宣国荣

    2000-01-01

    At present, most speech synthesis systems work independently of one another; they are mutually incompatible and cannot share interfaces. The lack of relevant standards makes it difficult to integrate speech synthesizers with other systems. Establishing a document markup language standard for text-to-speech (TTS) conversion or speech synthesis is an effective way to solve this problem. By analysing the Speech Synthesis Markup Language (SSML), the Spoken Text Markup Language (STML), JSML, and SABLE, a draft standard for an XML/SGML-based markup scheme aimed at TTS, this paper studies the standardization of markup languages for text-to-speech systems.
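
    For illustration, a fragment in the style of the later W3C SSML recommendation (the drafts analysed in the paper may use different element names), checked for well-formedness with a standard XML parser:

        # Sketch: prosody, pause and emphasis markup around plain text to be synthesized.
        import xml.etree.ElementTree as ET

        ssml = """<speak version="1.0" xml:lang="en-US">
          Please hold.
          <break time="500ms"/>
          <prosody rate="slow" pitch="low">Your call is important to us.</prosody>
          <emphasis level="strong">Thank you.</emphasis>
        </speak>"""

        root = ET.fromstring(ssml)
        print([child.tag for child in root])   # ['break', 'prosody', 'emphasis']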

  1. A New Standard of Data Exchanging and Processing over the Web: Extensible Markup Language%网络数据交换和处理的新标准--可扩展标签语言

    Institute of Scientific and Technical Information of China (English)

    岳琪; 张静; 张瑶; 梁颖红; 王育英

    2001-01-01

    XML (Extensible Markup Language) is a Web data format description language derived from SGML (Standard Generalized Markup Language). Its data content is independent and self-describing, so its information can be used by any application. This paper introduces the concept and characteristics of XML and uses an XML document to illustrate how such documents are written and their principle of independent design.

  2. What Digital Imaging and Communication in Medicine (DICOM) could look like in common object request broker (CORBA) and extensible markup language (XML).

    Science.gov (United States)

    Van Nguyen, A; Avrin, D E; Tellis, W M; Andriole, K P; Arenson, R L

    2001-06-01

    Common object request broker architecture (CORBA) is a method for invoking distributed objects across a network. There has been some activity in applying this software technology to Digital Imaging and Communications in Medicine (DICOM), but no documented demonstration of how this would actually work. We report a CORBA demonstration that is functionally equivalent and in some ways superior to the DICOM communication protocol. In addition, in and outside of medicine, there is great interest in the use of extensible markup language (XML) to provide interoperation between databases. An example implementation of the DICOM data structure in XML will also be demonstrated. Using Visibroker ORB from Inprise (Scotts Valley, CA), a test bed was developed to simulate the principle DICOM operations: store, query, and retrieve (SQR). SQR is the most common interaction between a modality device application entity (AE) such as a computed tomography (CT) scanner, and a storage component, as well as between a storage component and a workstation. The storage of a CT study by invoking one of several storage objects residing on a network was simulated and demonstrated. In addition, XML database descriptors were used to facilitate the transfer of DICOM header information between independent databases. CORBA is demonstrated to have great potential for the next version of DICOM. It can provide redundant protection against single points of failure. XML appears to be an excellent method of providing interaction between separate databases managing the DICOM information object model, and may therefore eliminate the common use of proprietary client-server databases in commercial implementations of picture archiving and communication systems (PACS).
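
    A sketch of the general idea of carrying DICOM header information in XML for exchange between databases; the element names below are illustrative only and are not a DICOM or PACS vendor schema:

        # Sketch: a few DICOM header attributes serialized as XML.
        import xml.etree.ElementTree as ET

        study = ET.Element("DicomStudy")
        for name, value in [("PatientID", "12345"),
                            ("Modality", "CT"),
                            ("StudyInstanceUID", "1.2.840.99999.1"),
                            ("StudyDescription", "CHEST CT")]:
            ET.SubElement(study, name).text = value

        print(ET.tostring(study, encoding="unicode"))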

  3. An Attempt to Construct a Database of Photographic Data of Radiolarian Fossils with the Hypertext Markup Language

    OpenAIRE

    磯貝, 芳徳; 水谷, 伸治郎; Yoshinori, Isogai; Shinjiro, MIZUTANI

    1998-01-01

    A collection of scanning electron micrographs of radiolarian fossils was turned into a database using the Hypertext Markup Language. The database currently holds about one thousand photographs of radiolarian fossils and can be searched from various viewpoints such as fossil name, geological age, and excavation locality. Its construction showed that the Hypertext Markup Language is effective when ordinary researchers with no special expertise in computers or databases want to build their own databases by themselves. A further notable feature of a database built with the Hypertext Markup Language is that anyone can use it via the Internet. The construction process is described, the current status is reported, and the ideas and problems behind the construction of the database are discussed.

  4. Research on Holistic Index Method of Geography Markup Language%地理标记语言的一体化索引方法研究

    Institute of Scientific and Technical Information of China (English)

    张海涛; 原立峰; 姜杰

    2009-01-01

    Currently, the OGC GML (Geography Markup Language) specification has become the de facto standard for GIS data sharing, exchange, and spatial interoperation. By adopting the nested association expression approach of XML data, GML documents can store both the spatial information and the semantic relationship information of geographical elements. To improve the efficiency of path queries over these two types of information, the paper describes a holistic index method for GML data, called EKR+ (an optimized R+-tree based on an extended region code and K-means extent partitioning of GML feature elements). The experimental results show that the efficiency of semantic-spatial queries can be greatly improved by using EKR+.
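
    As a much-simplified stand-in for the indexing idea (not the EKR+ structure itself), a grid-bucket index over GML-style point features; the feature identifiers and coordinates are invented:

        # Sketch: bucket point features into grid cells so that a spatial lookup
        # only touches candidates in the query cell.
        from collections import defaultdict

        features = {"f1": (121.5, 31.2), "f2": (116.4, 39.9), "f3": (121.6, 31.3)}

        def cell(x, y, size=1.0):
            return (int(x // size), int(y // size))

        index = defaultdict(list)
        for fid, (x, y) in features.items():
            index[cell(x, y)].append(fid)

        print(index[cell(121.5, 31.2)])   # candidates in the query cell: ['f1', 'f3']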

  5. Clu-GML. An algorithm for clustering geography markup language documents by structure%GML文档结构聚类算法Clu-GML

    Institute of Scientific and Technical Information of China (English)

    苗建新; 吉根林

    2008-01-01

    This paper proposes Clu-GML, a new algorithm for clustering Geography Markup Language (GML) documents by structure. Unlike related algorithms, it introduces the computation of representative trees into agglomerative hierarchical clustering: the representative tree of a cluster is obtained by mining maximal frequent induced subtrees, new clusters are discovered by comparing representative trees, and clustering is completed by updating the representative trees of the new clusters. This not only reduces the time cost of clustering but also yields a clustering description for each cluster. Experimental results show that Clu-GML is effective and outperforms other algorithms of the same kind.

  6. The development of MML (Medical Markup Language) version 3.0 as a medical document exchange format for HL7 messages.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2004-12-01

    Medical Markup Language (MML), as a set of standards, has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML Version 2.21 used XML as a metalanguage and was announced in 1999. In 2001, MML was updated to Version 2.3, which contained 12 modules. The latest version--Version 3.0--is based on the HL7 Clinical Document Architecture (CDA). During the development of this new version, the structure of MML Version 2.3 was analyzed, subdivided into several categories, and redefined so the information defined in MML could be described in HL7 CDA Level One. As a result of this development, it has become possible to exchange MML Version 3.0 medical documents via HL7 messages.

  7. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    Science.gov (United States)

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  8. 基于超文本标记语言的信息隐藏方法研究与实现%Research and Implementation of the Information Hiding Technology Based on Hypertext Markup Language

    Institute of Scientific and Technical Information of China (English)

    吴大胜

    2011-01-01

    This paper analyzes the structure of hypertext files, proposes a new information hiding method based on the Hypertext Markup Language, and analyzes the algorithm.
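
    A generic whitespace-based scheme is often used to illustrate this idea (the paper's own algorithm is not reproduced here): a trailing space on a line encodes a 1 bit, its absence a 0 bit, and browsers render the page identically either way.

        # Sketch: embed and extract bits via trailing spaces on HTML lines.
        def embed(html_lines, bits):
            return [line + " " if bit else line for line, bit in zip(html_lines, bits)]

        def extract(html_lines, n_bits):
            return [1 if line.endswith(" ") else 0 for line in html_lines[:n_bits]]

        cover = ["<html>", "<body>", "<p>Hello</p>", "</body>", "</html>"]
        stego = embed(cover, [1, 0, 1, 1, 0])
        print(extract(stego, 5))   # [1, 0, 1, 1, 0]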

  9. PENGUKURAN KINERJA BEBERAPA SISTEM BASIS DATA RELASIONAL DENGAN KEMAMPUAN MENYIMPAN DATA BERFORMAT GML (GEOGRAPHY MARKUP LANGUAGE YANG DAPAT DIGUNAKAN UNTUK MENDASARI APLIKASI-APLIKASI SISTEM INFORMASI GEOGRAFIS

    Directory of Open Access Journals (Sweden)

    Adi Nugroho

    2009-01-01

    Full Text Available If we want to present spatial data to users through GIS (Geographical Information System) applications, we have two choices for the underlying database: a general RDBMS (Relational Database Management System) storing conventional data types (number, char, varchar, etc.), or a database storing spatial data in GML (Geography Markup Language) format. (GML is another specialized XML vocabulary, in this case for spatial data.) If we choose GML, we again have two choices: storing the data in an XML-enabled database (a relational database that can store XML data) or in a Native XML Database (NXD), a special database designed for storing XML data. In this paper we compare the performance of several XML-enabled databases when performing GML CRUD (Create-Read-Update-Delete) operations, and we also examine the flexibility of XML-enabled databases from the programmer's point of view.
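
    A toy CRUD round-trip of a GML fragment through a relational table, with SQLite standing in for an XML-enabled RDBMS; the table, column, and feature names are illustrative:

        # Sketch: store, read, update and delete a GML geometry kept as text.
        import sqlite3

        gml = ('<gml:Point srsName="EPSG:4326" xmlns:gml="http://www.opengis.net/gml">'
               '<gml:pos>110.4 -7.8</gml:pos></gml:Point>')

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, name TEXT, geometry_gml TEXT)")
        con.execute("INSERT INTO features (name, geometry_gml) VALUES (?, ?)", ("campus", gml))        # Create
        row = con.execute("SELECT geometry_gml FROM features WHERE name = ?", ("campus",)).fetchone()  # Read
        con.execute("UPDATE features SET name = ? WHERE name = ?", ("main campus", "campus"))          # Update
        print(row[0][:44], "...")
        con.execute("DELETE FROM features WHERE name = ?", ("main campus",))                           # Delete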

  10. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Full Text Available Today's software systems are too complex to implement them and model them using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Thus to handle an increasing number of related or integrated languages is the most challenging task in the development of tools. We use object oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.

  11. Network Application Server Using Extensible Mark-Up Language (XML) to Support Distributed Databases and 3D Environments

    Science.gov (United States)

    2001-12-01

    <!-- xml:lang: language code (as per XML 1.0 spec); dir: direction for weak/neutral text -->
    <!ENTITY % i18n "lang %LanguageCode; #IMPLIED ...">
    <!ENTITY % attrs "%coreattrs; %i18n; %events;">
    <!-- text alignment for p, div, h1-h6. The default is align="left" for ltr headings, "right" ... -->
    <!-- ... the namespace URI designates the document profile -->
    <!ELEMENT html (head, body)>
    <!ATTLIST html %i18n; xmlns ...

  12. Medical problem and document model for natural language understanding.

    Science.gov (United States)

    Meystre, Stephanie; Haug, Peter J

    2003-01-01

    We are developing tools to help maintain a complete, accurate and timely problem list within a general purpose Electronic Medical Record system. As a part of this project, we have designed a system to automatically retrieve medical problems from free-text documents. Here we describe an information model based on XML (eXtensible Markup Language) and compliant with the CDA (Clinical Document Architecture). This model is used to ease the exchange of clinical data between the Natural Language Understanding application that retrieves potential problems from narrative document, and the problem list management application.

  13. Inventories, markups and real rigidities in sticky price models of the Canadian economy

    OpenAIRE

    Kryvtsov, Oleksiy; Midrigan, Virgiliu

    2011-01-01

    Recent New Keynesian models of macroeconomy view nominal cost rigidities, rather than nominal price rigidities, as the key feature that accounts for the observed persistence in output and inflation. Kryvtsov and Midrigan (2010a,b) reassess these conclusions by combining a theory based on nominal rigidities and storable goods with direct evidence on inventories for the U.S. This paper applies Kryvtsov and Midrigan's model to the case of Canada. The model predicts that if costs of production ar...

  14. GIBS Keyhole Markup Language (KML)

    Data.gov (United States)

    National Aeronautics and Space Administration — The KML documentation standard provides a solution for imagery integration into mapping tools that support the KML standard, specifically Google Earth. Using...

  15. Design and Optimization of Extensible Vector Graphics Markup Language and its Integration with Knowledge-based System%XvgML语言的设计、优化及其与知识系统的集成

    Institute of Scientific and Technical Information of China (English)

    张国钢; 王建华; 武安波

    2003-01-01

    Electrical diagrams are an engineering language for describing the structure, principles, and functions of electrical systems or devices, and they play a very important role in electrical system design. This paper therefore presents XvgML (Extensible Vector Graphics Markup Language), a vector electrical-diagram markup language implemented on the XML (Extensible Markup Language) metalanguage, and describes its design, implementation, and optimization. XvgML expresses both geometric features and electrical physical quantities, effectively overcoming the limited ability of some vector graphics languages to describe electrical systems. It inherits XML's standardization, simplicity, and extensibility, is an object-oriented descriptive information language, and is well suited to applications integrated with knowledge-based systems. XvgML has promising applications in Web-based intelligent design systems, product quotation systems, and data exchange between heterogeneous systems.

  16. Graphical Modeling Language Tool

    NARCIS (Netherlands)

    Rumnit, M.

    2003-01-01

    The group of the faculty EE-Math-CS of the University of Twente is developing a graphical modeling language for specifying concurrency in software design. This graphical modeling language has a mathematical background based on the theory of CSP. This language contains the power to create trustworthy

  17. Avionics Architecture Modelling Language

    Science.gov (United States)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  18. 一个基于SGML的面向语音合成的标记语言的分析%Analysis of a SGML-based Markup Language for Speech Synthesis

    Institute of Scientific and Technical Information of China (English)

    岳东剑; 柴佩琪; 宣国荣

    2000-01-01

    This paper discusses the importance and necessity of adding annotation markup to the input documents of speech synthesis systems, and explains why a unified text markup annotation scheme matters for achieving compatibility between synthesizers and easing their integration with other systems. On this basis, it analyses and studies the markup design of SSML (Speech Synthesis Markup Language), an SGML-based markup language for speech synthesis, with particular attention to markup for prosody control and related issues.

  19. 基于GML协同线路勘测设计信息传输的研究%Study on information transmission related to railway route reconnaissance design based on GML (Geographical Markup Language) coordination

    Institute of Scientific and Technical Information of China (English)

    刘开南

    2006-01-01

    Route reconnaissance and design are the two most important stages of railway construction. This article proposes a new GML (Geography Markup Language)-based method for data exchange and storage. GML is used to describe the graphic elements, features, and their relationships in a design, to transmit the surveyor's reconnaissance information completely to the designer, and to transmit one designer's design information completely to other designers, thereby meeting the requirements of data exchange.

  20. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

    Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism, for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.

  1. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We
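
    For orientation, the markup expression at the heart of this estimation strategy is usually written as follows (notation mine; see the paper for the exact derivation):

        \[
          \mu_{it} \;=\; \frac{\theta^{v}_{it}}{\alpha^{v}_{it}},
          \qquad
          \alpha^{v}_{it} \;=\; \frac{P^{v}_{it}\,V_{it}}{P_{it}\,Q_{it}},
        \]

    where \(\theta^{v}_{it}\) is the output elasticity of a variable input (estimated from a production function while controlling for unobserved productivity) and \(\alpha^{v}_{it}\) is that input's share of the firm's revenue.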

  2. TEI Standoff Markup - A work in progress

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena; Broughton, Misha

    2015-01-01

    Markup is said to be standoff, or external, “when the markup data is placed outside of the text it is meant to tag” (). One of the most widely recognized limitations of inline XML markup is its inability to cope with element overlap; standoff has been considered as a possible solution to

  4. Chemical markup, XML, and the World Wide Web. 4. CML schema.

    Science.gov (United States)

    Murray-Rust, Peter; Rzepa, Henry S

    2003-01-01

    A revision to Chemical Markup Language (CML) is presented as a XML Schema compliant form, modularized into nonchemical and chemical components. STMML contains generic concepts for numeric data and scientific units, while CMLCore retains most of the chemical functionality of the original CML 1.0 and extends it by adding handlers for chemical substances, extended bonding models and names. We propose extension via new namespaced components for chemical queries, reactions, spectra, and computational chemistry. The conformance with XML schemas allows much greater control over datatyping, document validation, and structure.
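
    For illustration, a water molecule in CML-style markup; the element and attribute names follow commonly published CML examples (atomArray, bondArray, elementType, atomRefs2) but should be validated against the current CML schema before being relied on:

        # Sketch: build and inspect a small CML-style molecule description.
        import xml.etree.ElementTree as ET

        cml = """<molecule id="m1" title="water">
          <atomArray>
            <atom id="a1" elementType="O"/>
            <atom id="a2" elementType="H"/>
            <atom id="a3" elementType="H"/>
          </atomArray>
          <bondArray>
            <bond atomRefs2="a1 a2" order="1"/>
            <bond atomRefs2="a1 a3" order="1"/>
          </bondArray>
        </molecule>"""

        mol = ET.fromstring(cml)
        print([atom.get("elementType") for atom in mol.find("atomArray")])   # ['O', 'H', 'H']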

  5. Coordination models and languages

    NARCIS (Netherlands)

    Papadopoulos, G.A.; Arbab, F.

    1998-01-01

    A new class of models, formalisms and mechanisms has recently evolved for describing concurrent and distributed computations based on the concept of ``coordination''. The purpose of a coordination model and associated language is to provide a means of integrating a number of possibly heterogeneous c

  6. Trade reforms, mark-ups and bargaining power of workers: the case ...

    African Journals Online (AJOL)

    Trade reforms, mark-ups and bargaining power of workers: the case of ... from firms' market power, which is expected to decline with trade reforms. ... model of mark-up with labor bargaining power was estimated using random ...

  7. Wine Price Markup in California Restaurants

    OpenAIRE

    Amspacher, William

    2011-01-01

    The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.
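
    The mechanics of such a regression can be sketched as follows; the data points are invented and only the estimation step is illustrated:

        # Sketch: ordinary least squares fit of restaurant markup on retail price.
        import numpy as np

        retail_price = np.array([8.0, 12.0, 20.0, 35.0, 60.0])     # hypothetical retail prices ($)
        markup_pct = np.array([210.0, 180.0, 150.0, 120.0, 95.0])  # hypothetical markups (%)

        slope, intercept = np.polyfit(retail_price, markup_pct, 1)
        print(f"markup ≈ {intercept:.1f} {slope:+.2f} * price")     # negative slope: inverse relationship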

  8. Planned growth as a determinant of the markup: the case of Slovenian manufacturing

    Directory of Open Access Journals (Sweden)

    Maks Tajnikar

    2009-11-01

    Full Text Available The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.

  9. Development of Classification Markup Language of International Classification of Functioning, Disability and Health%《国际功能、残疾和健康分类》管理信息平台的开发①

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    A management information platform for the International Classification of Functioning, Disability and Health (ICF), a core international medical classification standard of the World Health Organization, was developed on the basis of the Classification Markup Language (ClaML), drawing on a contemporary medical knowledge management architecture, knowledge management methods, and literature research. The platform provides standardized electronic representation and management of the ICF structure, coding system, full classification content, and internal semantics, and implements functions such as classification editing, modification, query, viewing, and consistency checking, together with classification import, export, and publication. The results can be used for ICF maintenance, multilingual alignment and updating, and the construction of ICF application platforms.

  10. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    2004-01-01

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
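
    A small sketch of the parsimonious-estimation idea described here: term probabilities of a document model are re-estimated with EM against a fixed background (collection) model, so that terms well explained by the background lose mass. The lambda value and toy data are assumptions.

        # Sketch: EM re-estimation of a document language model against a background model.
        from collections import Counter

        def parsimonious_lm(doc_terms, collection_prob, lam=0.5, iters=20, eps=1e-10):
            tf = Counter(doc_terms)
            p_doc = {t: c / len(doc_terms) for t, c in tf.items()}    # start from the MLE
            for _ in range(iters):
                # E-step: expected counts attributed to the document model
                e = {t: tf[t] * (lam * p_doc[t]) /
                        (lam * p_doc[t] + (1 - lam) * collection_prob.get(t, eps))
                     for t in tf}
                total = sum(e.values())
                p_doc = {t: v / total for t, v in e.items()}          # M-step: renormalize
            return p_doc

        background = {"the": 0.07, "language": 0.001, "model": 0.001, "parsimonious": 1e-6}
        print(parsimonious_lm(["the", "the", "parsimonious", "language", "model"], background))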

  11. Usability of XML Query Languages

    NARCIS (Netherlands)

    Graaumans, J.P.M.

    2005-01-01

    The eXtensible Markup Language (XML) is a markup language which enables re-use of information. Specific query languages for XML are developed to facilitate this. There are large differences between history, design goal, and syntax of the XML query languages. However, in practice these languages are

  12. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    in empirical industrial organization often rely on the availability of very detailed market-level data with information on prices, quantities sold, characteristics of products and more recently supplemented with consumer-level attributes. Often, both researchers and government agencies cannot rely on such detailed data, but still need an assessment of whether changes in the operating environment of firms had an impact on markups and therefore on consumer surplus. In this paper, we derive an estimating equation to estimate markups using standard production plant-level data based on the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups and iii) firms' markups increase (decrease) upon export entry (exit). We see these findings as a first step...

  13. A multiscale framework based on the physiome markup languages for exploring the initiation of osteoarthritis at the bone-cartilage interface.

    Science.gov (United States)

    Shim, Vickie B; Hunter, Peter J; Pivonka, Peter; Fernandez, Justin W

    2011-12-01

    The initiation of osteoarthritis (OA) has been linked to the onset and progression of pathologic mechanisms at the cartilage-bone interface. Most importantly, this degenerative disease involves cross-talk between the cartilage and subchondral bone environments, so an informative model should contain the complete complex. In order to evaluate this process, we have developed a multiscale model using the open-source ontologies developed for the Physiome Project with cartilage and bone descriptions at the cellular, micro, and macro levels. In this way, we can effectively model the influence of whole body loadings at the macro level and the influence of bone organization and architecture at the micro level, and have cell level processes that determine bone and cartilage remodeling. Cell information is then passed up the spatial scales to modify micro architecture and provide a macro spatial characterization of cartilage inflammation. We evaluate the framework by linking a common knee injury (anterior cruciate ligament deficiency) to proinflammatory mediators as a possible pathway to initiate OA. This framework provides a "virtual bone-cartilage" tool for evaluating hypotheses, treatment effects, and disease onset to inform and strengthen clinical studies.

  14. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton

  15. A Content Markup Language for Data Services

    Science.gov (United States)

    Noviello, C.; Acampa, P.; Mango Furnari, M.

    Network content delivery and documents sharing is possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because document life cycle involves a strong cooperation between domain experts and software developers. Furthermore, the emerging software methodologies, such as the service-oriented architecture and knowledge organization (e.g., semantic web) did not really solve the problems faced in a real distributed and cooperating settlement. In this chapter the authors' efforts to design and deploy a distribute and cooperating content management system are described. The main features of the system are a user configurable document type definition and a management middleware layer. It allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. In this chapter are also reported some of the experiences gained on deploying the developed framework in a cultural heritage dissemination settlement.

  16. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    Estimating markups has a long tradition in industrial organization and international trade. Economists and policy makers are interested in measuring the effect of various competition and trade policies on market power, typically measured by markups. The empirical methods that were developed in empirical industrial organization often rely on the availability of very detailed market-level data with information on prices, quantities sold, characteristics of products and more recently supplemented with consumer-level attributes. Often, both researchers and government agencies cannot rely on such detailed data, but still need an assessment of whether changes in the operating environment of firms had an impact on markups. ... We see these findings as a first step in opening up the productivity-export black box, and they provide a potential explanation for the big measured productivity premia for firms entering export markets.

  17. Language Models With Meta-information

    NARCIS (Netherlands)

    Shi, Y.

    2014-01-01

    Language modeling plays a critical role in natural language processing and understanding. Starting from a general structure, language models are able to learn natural language patterns from rich input data. However, the state-of-the-art language models only take advantage of words themselves, which

  18. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    Science.gov (United States)

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.

  19. Natural language modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K. [Sandia National Labs., Albuquerque, NM (United States)

    1997-11-01

    This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do it in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it has all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax, laws, machine function).

  20. Language acquisition and implication for language change: A computational model.

    OpenAIRE

    Clark, Robert A.J.

    1997-01-01

    Computer modeling techniques, when applied to language acquisition problems, give an often unrealized insight into the diachronic change that occurs in language over successive generations. This paper shows that using assumptions about language acquisition to model successive generations of learners in a computer simulation, can have a drastic effect on the long term changes that occur in a language. More importantly, it shows that slight changes in the acquisition ...

  1. 台灣圖書館網頁標記語言正確性之探討 The Study of Webs Markup Language Validations of Libraries in Taiwan

    Directory of Open Access Journals (Sweden)

    Jiann-Cherng Shieh

    2009-06-01

    Full Text Available A library website is an extension of library services, and the correctness of library web pages bears directly on the accessibility and accuracy dimensions of information ethics; whether library web pages comply with web design standards therefore matters for reader service. Markup language validation is one such standard, and validating library web pages reveals the extent to which they conform, helping libraries develop or maintain standard-compliant pages. This study used the Markup Validation Service provided by the W3C to test the homepages of the libraries of 158 colleges and universities and 24 public libraries, in order to survey the current state of markup correctness on the websites of academic and public libraries in Taiwan. The results show that the validation pass rate of these library homepages was 0%, and that well over one third of them had more than 100 errors, indicating that the markup correctness of library web pages urgently needs improvement. For errors flagged by the validator for which the W3C offers no repair suggestions, the study also provides worked examples of fixes, as a reference for libraries producing and maintaining their web pages.

  2. Student Modelling for Second Language Acquisition.

    Science.gov (United States)

    Bull, Susan

    1994-01-01

    Describes the student model of an intelligent computer-assisted language learning (CALL) system that is based on current theories in the field of second-language acquisition. Highlights include acquisition order of the target rules; language learning strategies; language transfer; language awareness; and student reactions. (Contains seven…

  3. Beliefs about Language Learning: The Horwitz Model.

    Science.gov (United States)

    Kuntz, Patricia S.

    Research on beliefs about second language learning based on a model designed by Elaine Horwitz is reviewed. The model is incorporated in the Beliefs About Language Learning Inventory (BALLI) developed for students of English as a Second Language, college students of commonly taught languages (French, German, Spanish), and college teachers of…

  4. The Control System Modeling Language

    CERN Document Server

    Zagar, K; Sekoranja, M; Tkacik, G; Vodovnik, A; Zagar, Klemen; Plesko, Mark; Sekoranja, Matej; Tkacik, Gasper; Vodovnik, Anze

    2001-01-01

    The well-known Unified Modeling Language (UML) describes software entities, such as interfaces, classes, operations and attributes, as well as relationships among them, e.g. inheritance, containment and dependency. The power of UML lies in Computer Aided Software Engineering (CASE) tools such as Rational Rose, which are also capable of generating software structures from visual object definitions and relations. UML also allows add-ons that define specific structures and patterns in order to steer and automate the design process. We have developed an add-on called Control System Modeling Language (CSML). It introduces entities and relationships that we know from control systems, such as "property" representing a single controllable point/channel, or an "event" specifying that a device is capable of notifying its clients through events. Entities can also possess CSML-specific characteristics, such as physical units and valid ranges for input parameters. CSML is independent of any specific language or technology...

  5. Hospital markup and operation outcomes in the United States.

    Science.gov (United States)

    Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M

    2016-07-01

    Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5-12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme markup hospitals were more often large (46.3% vs 33.8%, P < ...); for-profit hospitals more often had an extreme markup ratio, compared with 19.3% (n = 452) and 6.8% (n = 35) of nonprofit and government hospitals, respectively. Perioperative morbidity (32.7% vs 26.4%, P < ...) was greater at extreme markup hospitals. There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospital charges being 4 times greater than the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
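
    The markup ratio used throughout is simply charges divided by costs; a toy computation with invented figures:

        # Sketch: markup ratio = total charges / Medicare-allowable costs.
        charges, costs = 84_000.0, 21_000.0              # invented figures
        print(f"markup ratio = {charges / costs:.1f}")   # 4.0 -> billed 4x the allowable cost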

  6. Contextual Information and Specific Language Models for Spoken Language Understanding

    CERN Document Server

    Baggia, P; Gerbino, E; Moisa, L M; Popovici, C; Baggia, Paolo; Danieli, Morena; Gerbino, Elisabetta; Moisa, Loreta M.; Popovici, Cosmin

    1999-01-01

    In this paper we explain how contextual expectations are generated and used in the task-oriented spoken language understanding system Dialogos. The hard task of recognizing spontaneous speech on the telephone may greatly benefit from the use of specific language models during the recognition of callers' utterances. By 'specific language models' we mean a set of language models that are trained on contextually appropriated data, and that are used during different states of the dialogue on the basis of the information sent to the acoustic level by the dialogue management module. In this paper we describe how the specific language models are obtained on the basis of contextual information. The experimental results we report show that recognition and understanding performance are improved thanks to the use of specific language models.

  7. Modeling Coevolution between Language and Memory Capacity during Language Origin.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there lack sufficient evolutionary explorations on whether a high level of memory capacity is prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selections on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that: along with the origin of a communal language, an initially-low memory capacity for acquired linguistic knowledge was boosted; and such coherent increase in linguistic understandability and memory capacities reflected a language-memory coevolution; and such coevolution stopped till memory capacities became sufficient for language communications. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors for natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language.

  8. Modeling Coevolution between Language and Memory Capacity during Language Origin

    Science.gov (United States)

    Gong, Tao; Shuai, Lan

    2015-01-01

    Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there lack sufficient evolutionary explorations on whether a high level of memory capacity is prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selections on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that: along with the origin of a communal language, an initially-low memory capacity for acquired linguistic knowledge was boosted; and such coherent increase in linguistic understandability and memory capacities reflected a language-memory coevolution; and such coevolution stopped till memory capacities became sufficient for language communications. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors for natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876

  9. Formal Models of Language Learning.

    Science.gov (United States)

    Pinker, Steven

    1979-01-01

    Research addressing development of mechanistic models capable of acquiring languages on the basis of exposure to linguistic data is reviewed. Research focuses on major issues in developmental psycholinguistics--in particular, nativism and empiricism, the role of semantics and pragmatics, cognitive development, and the importance of simplified…

  10. Models of natural language understanding.

    Science.gov (United States)

    Bates, M

    1995-10-24

    This paper surveys some of the fundamental problems in natural language (NL) understanding (syntax, semantics, pragmatics, and discourse) and the current approaches to solving them. Some recent developments in NL processing include increased emphasis on corpus-based rather than example- or intuition-based work, attempts to measure the coverage and effectiveness of NL systems, dealing with discourse and dialogue phenomena, and attempts to use both analytic and stochastic knowledge. Critical areas for the future include grammars that are appropriate to processing large amounts of real language; automatic (or at least semi-automatic) methods for deriving models of syntax, semantics, and pragmatics; self-adapting systems; and integration with speech processing. Of particular importance are techniques that can be tuned to such requirements as full versus partial understanding and spoken language versus text. Portability (the ease with which one can configure an NL system for a particular application) is one of the largest barriers to application of this technology.

  11. Better Language Models with Model Merging

    CERN Document Server

    Brants, T

    1996-01-01

    This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
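
    A toy illustration of the merging operation on a bigram model (states are word histories; merging pools their outgoing counts). The greedy selection of which states to merge, and the smoothing used in the paper, are omitted; the data and epsilon smoothing here are assumptions.

        # Sketch: merge two bigram states and compare held-out log-probability.
        import math
        from collections import defaultdict, Counter

        def train_counts(tokens):
            counts = defaultdict(Counter)
            for h, w in zip(tokens, tokens[1:]):
                counts[h][w] += 1
            return counts

        def log_prob(counts, state_of, tokens, eps=1e-6):
            lp = 0.0
            for h, w in zip(tokens, tokens[1:]):
                c = counts[state_of(h)]
                lp += math.log((c[w] + eps) / (sum(c.values()) + eps * 1000))
            return lp

        train = "a b a b a c a b".split()
        counts = train_counts(train)

        # Merge the states for histories "b" and "c" into a single, more general state.
        merged = defaultdict(Counter, {h: c.copy() for h, c in counts.items() if h not in ("b", "c")})
        merged["bc"] = counts["b"] + counts["c"]

        def state(h):
            return "bc" if h in ("b", "c") else h

        held_out = "a b a c".split()
        print("original model:", round(log_prob(counts, lambda h: h, held_out), 3))
        print("merged model:  ", round(log_prob(merged, state, held_out), 3))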

  12. The OMG Modelling Language (SYSML)

    Science.gov (United States)

    Hause, M.

    2007-08-01

    On July 6th 2006, the Object Management Group (OMG) announced the adoption of the OMG Systems Modeling Language (OMG SysML). The SysML specification was in response to the joint Request for Proposal issued by the OMG and INCOSE (the International Council on Systems Engineering) for a customized version of UML 2, designed to address the specific needs of system engineers. SysML is a visual modeling language that extends UML 2 in order to support the specification, analysis, design, verification and validation of complex systems. This paper will look at the background of SysML and summarize the SysML specification including the modifications to UML 2.0, along with the new requirement and parametric diagrams. It will also show how SysML artifacts can be used to specify the requirements for other solution spaces such as software and hardware to provide handover to other disciplines.

  13. Chemical markup, XML, and the World Wide Web. 5. Applications of chemical metadata in RSS aggregators.

    Science.gov (United States)

    Murray-Rust, Peter; Rzepa, Henry S; Williamson, Mark J; Willighagen, Egon L

    2004-01-01

    Examples of the use of the RSS 1.0 (RDF Site Summary) specification together with CML (Chemical Markup Language) to create a metadata based alerting service termed CMLRSS for molecular content are presented. CMLRSS can be viewed either using generic software or with modular opensource chemical viewers and editors enhanced with CMLRSS modules. We discuss the more automated use of CMLRSS as a component of a World Wide Molecular Matrix of semantically rich chemical information.

  14. Language Learning Strategies and Its Training Model

    Science.gov (United States)

    Liu, Jing

    2010-01-01

    This paper summarizes and reviews the literature regarding language learning strategies and their training models, pointing out the significance of language learning strategies to EFL learners and arguing that an applicable and effective language learning strategies training model, beneficial to both EFL learners and instructors, is badly needed.

  15. Linguistics: Modelling the dynamics of language death

    Science.gov (United States)

    Abrams, Daniel M.; Strogatz, Steven H.

    2003-08-01

    Thousands of the world's languages are vanishing at an alarming rate, with 90% of them being expected to disappear with the current generation. Here we develop a simple model of language competition that explains historical data on the decline of Welsh, Scottish Gaelic, Quechua (the most common surviving indigenous language in the Americas) and other endangered languages. A linguistic parameter that quantifies the threat of language extinction can be derived from the model and may be useful in the design and evaluation of language-preservation programmes.
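
    The competition model referred to here is usually quoted in the following form (reproduced from the published paper as I recall it, so treat the notation as a paraphrase):

        \[
          \frac{dx}{dt} \;=\; y\,P_{yx}(x,s) \;-\; x\,P_{xy}(x,s), \qquad y = 1 - x,
        \]
        \[
          P_{yx}(x,s) \;=\; c\,x^{a}\,s, \qquad P_{xy}(x,s) \;=\; c\,(1-x)^{a}\,(1-s),
        \]

    where \(x\) is the fraction of speakers of language X, \(s \in [0,1]\) its perceived status, and \(a\) an exponent fitted to the historical data; the mixed-population fixed point is unstable, so one of the two languages eventually dies out.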

  16. Modeling Languages Refine Vehicle Design

    Science.gov (United States)

    2009-01-01

    Cincinnati, Ohio's TechnoSoft Inc. is a leading provider of object-oriented modeling and simulation technology used for commercial and defense applications. With funding from Small Business Innovation Research (SBIR) contracts issued by Langley Research Center, the company continued development on its adaptive modeling language, or AML, originally created for the U.S. Air Force. TechnoSoft then created what is now known as its Integrated Design and Engineering Analysis Environment, or IDEA, which can be used to design a variety of vehicles and machinery. IDEA's customers include clients in green industries, such as designers for power plant exhaust filtration systems and wind turbines.

  17. Model-based development of robotic systems and services in construction robotics

    DEFF Research Database (Denmark)

    Schlette, Christian; Roßmann, Jürgen

    2017-01-01

    More and more of our indoor/outdoor environments are available as 3D digital models. In particular, digital models such as the CityGML (City Geography Markup Language) format for cities and the BIM (Building Information Modeling) methodology for buildings are becoming important standards...

  18. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  20. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  1. RELISH LMF: unlocking the full power of the Lexical Markup Framework

    NARCIS (Netherlands)

    Windhouwer, Menzo

    2014-01-01

    In 2008 ISO TC 37 (ISO 24613:2008) published the Lexical Markup Framework (LMF) standard (www.lexicalmarkupframework.org). This standard was based on input from many experts in the field; a core model and a whole series of extensions were specified in the form of UML class diagrams. For a specific

  2. Trade Reforms, Mark-Ups and Bargaining Power of Workers: The ...

    African Journals Online (AJOL)

    model of mark-up with labor bargaining power was estimated using random effects and LDPDM. ... article and also Dar es Salaam University and African Economic Research Consortium. ... up with a positive association between workers' rent sharing parameter and ... individual contract or through collective agreements.

  3. Semantic Markup for Literary Scholars: How Descriptive Markup Affects the Study and Teaching of Literature.

    Science.gov (United States)

    Campbell, D. Grant

    2002-01-01

    Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)

  4. Domain-specific markup languages and descriptive metadata: their functions in scientific resource discovery

    Directory of Open Access Journals (Sweden)

    Marcia Lei Zeng

    2010-01-01

    Full Text Available While metadata has been a strong focus within information professionals' publications, projects, and initiatives during the last two decades, a significant number of domain-specific markup languages have also been developing on a parallel path at the same rate as metadata standards; yet, they do not receive comparable attention. This essay discusses the functions of these two kinds of approaches in scientific resource discovery and points out their potential complementary roles through appropriate interoperability approaches.

  5. A Review of Process Modeling Language Paradigms

    Institute of Scientific and Technical Information of China (English)

    MA Qin-hai; GUAN Zhi-min; LI Ying; ZHAO Xi-nan

    2002-01-01

    Process representation or modeling plays an important role in business process engineering. Process modeling languages can be evaluated by the extent to which they provide constructs useful for representing and reasoning about the aspects of a process, and subsequently are chosen for a certain purpose. This paper reviews process modeling language paradigms and points out their advantages and disadvantages.

  6. Two Language Models Using Chinese Semantic Parsing

    Institute of Scientific and Technical Information of China (English)

    LI Mingqin; WANG Xia; WANG Zuoying

    2006-01-01

    This paper presents two language models that utilize a Chinese semantic dependency parsing technique for speech recognition. The models are based on a representation of the Chinese semantic structure with dependency relations. A semantic dependency parser was described to automatically tag the semantic class for each word with 90.9% accuracy and parse the sentence semantic dependency structure with 75.8% accuracy. The Chinese semantic parsing technique was applied to structure language models to develop two language models, the semantic dependency model (SDM) and the headword trigram model (HTM). These language models were evaluated using Chinese speech recognition. The experiments show that both models outperform the word trigram model in terms of the Chinese character recognition error rate.

  7. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object...... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...

  8. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  9. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    Maddix Jason

    2010-07-01

    Full Text Available Abstract Background: Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. Methods: We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Results: Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Conclusion: Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals

  10. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan.

    Science.gov (United States)

    Waning, Brenda; Maddix, Jason; Soucy, Lyne

    2010-07-13

    Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. Health systems researchers must document the positive and negative
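
    As a quick illustration of the mark-up arithmetic underlying the two records above, the sketch below computes a retail mark-up as a percentage of acquisition cost. The costs and prices are hypothetical; they merely echo the 32%-244% range reported for 2007 and are not figures from the study.

    ```python
    # Worked example of mark-up arithmetic; all numbers are hypothetical.
    def markup_percent(cost, price):
        """Retail mark-up expressed as a percentage of the acquisition cost."""
        return (price - cost) / cost * 100.0

    print(markup_percent(100.0, 132.0))  # 32.0  -- lower end of the reported 2007 range
    print(markup_percent(100.0, 344.0))  # 244.0 -- upper end of the reported 2007 range
    print(markup_percent(100.0, 200.0))  # 100.0 -- a "100% mark-up"
    ```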

  11. Modelling Typical Online Language Learning Activity

    Science.gov (United States)

    Montoro, Carlos; Hampel, Regine; Stickler, Ursula

    2014-01-01

    This article presents the methods and results of a four-year-long research project focusing on the language learning activity of individual learners using online tasks conducted at the University of Guanajuato (Mexico) in 2009-2013. An activity-theoretical model (Blin, 2010; Engeström, 1987) of the typical language learning activity was used to…

  12. Microscopic Abrams Strogatz model of language competition

    Science.gov (United States)

    Stauffer, Dietrich; Castelló, Xavier; Eguíluz, Víctor M.; San Miguel, Maxi

    2007-02-01

    The differential equation of Abrams and Strogatz for the competition between two languages is compared with agent-based Monte Carlo simulations for fully connected networks as well as for lattices in one, two and three dimensions, with up to 10^9 agents. In the case of socially equivalent languages, agent-based models and a mean-field approximation give grossly different results.
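
    For readers coming to this record cold, the macroscopic Abrams-Strogatz equation that the agent-based simulations are compared against is commonly written as below, with x the fraction of speakers of language X, s the relative status of X, and c, a fitted constants. This is the standard textbook form, not a reproduction of the record's notation.

    ```latex
    % Abrams-Strogatz mean-field equation (standard form):
    % x = fraction of X-speakers, s = status of X, c and a are fitted constants.
    \frac{dx}{dt} = c\,(1 - x)\,x^{a}\,s - c\,x\,(1 - x)^{a}\,(1 - s)
    ```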

  13. Connectionist Models: Implications in Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Farid Ghaemi

    2011-10-01

    Full Text Available In language acquisition, ‘Emergentists’ claim that simple learning mechanisms, of the kind attested elsewhere in cognition, are sufficient to bring about the emergence of complex language representations (Gregg, 2003). The connectionist model is one of several models proposed by emergentists. This paper attempts to clarify the basic assumptions of this model, its advantages, and the criticisms leveled at it.

  14. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    Science.gov (United States)

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditures after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention of changes in reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.
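
    The analysis described above is a segmented (interrupted time series) regression: a level term and a trend term are added at the policy date and their coefficients are tested. The sketch below shows the general technique with synthetic monthly data and hypothetical variable names; it does not use the study's data or reproduce its exact model.

    ```python
    # A minimal segmented-regression (interrupted time series) sketch with
    # synthetic data; variable names and numbers are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    months = np.arange(60)                                   # five years of monthly data
    policy_month = 24                                        # month the policy takes effect
    level = (months >= policy_month).astype(int)             # step change after the policy
    trend = np.where(months >= policy_month, months - policy_month, 0)  # slope change

    # synthetic outcome: rising baseline whose growth slows after the intervention
    y = 500 + 15 * months - 8 * trend + rng.normal(0, 20, size=months.size)

    X = sm.add_constant(pd.DataFrame({"time": months, "level": level, "trend": trend}))
    fit = sm.OLS(y, X).fit()
    print(fit.params)   # a negative 'trend' coefficient indicates a slowed increase post-policy
    ```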

  15. Presheaf Models for CCS-like Languages

    DEFF Research Database (Denmark)

    Cattani, Gian Luca; Winskel, Glynn

    2003-01-01

    for a general process language, in which CCS and related languages are easily encoded. The results are then transferred to traditional models for processes. By first establishing the congruence results for presheaf models, abstract, general proofs of congruence properties can be provided, and the awkwardness caused by traditional models not always possessing the cartesian liftings used in the breakdown of process operations is sidestepped. The abstract results are applied to show that hereditary history-preserving bisimulation is a congruence for CCS-like languages to which is added a refinement...

  16. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a tool for quality assessment of procedural and declarative knowledge. We developed this tool for evaluating the specification of mark-up-based clinical GLs. Using this graphical tool, the expert physician and knowledge engineer collaborate to score, using a pre-defined scoring scale, each of the knowledge roles of the mark-ups, comparing them to a gold standard. The tool enables different users at different locations to score the mark-ups simultaneously.

  17. Wave equation modelling using Julia programming language

    Science.gov (United States)

    Kim, Ahreum; Ryu, Donghyun; Ha, Wansoo

    2016-04-01

    Julia is a young high-performance dynamic programming language for scientific computations. It provides an extensive mathematical function library, a clean syntax and its own parallel execution model. We developed 2D wave equation modeling programs using the Julia and C programming languages and compared their performance, using the same modeling algorithm for the two programs. We used Julia version 0.3.9 in this comparison, declared the data types of function arguments and used the inbounds macro in the Julia program. Numerical results showed that the C programs compiled with the Intel and GNU compilers were faster than the Julia program, by about 18% and 7%, respectively. Taking the simplicity of a dynamic programming language into consideration, Julia can be a novel alternative to existing statically typed programming languages.
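
    The record compares Julia and C implementations of the same algorithm; the code itself is not reproduced in the abstract. As a rough orientation only, the sketch below shows the kind of explicit finite-difference kernel typically used for 2D acoustic wave-equation modelling, written here in Python rather than Julia or C.

    ```python
    # A minimal 2-D acoustic wave-equation kernel (explicit finite differences);
    # grid size, velocity and time step are illustrative, not the paper's setup.
    import numpy as np

    nx = nz = 200
    dx = 10.0                              # grid spacing (m)
    dt = 0.001                             # time step (s), chosen to satisfy the CFL condition
    v = np.full((nz, nx), 2000.0)          # constant velocity model (m/s)

    p_prev = np.zeros((nz, nx))
    p_curr = np.zeros((nz, nx))
    p_curr[nz // 2, nx // 2] = 1.0         # impulsive source at the centre of the grid

    for _ in range(500):
        lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
               np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr) / dx**2
        p_next = 2.0 * p_curr - p_prev + (v * dt) ** 2 * lap
        p_prev, p_curr = p_curr, p_next
    ```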

  18. Improved world-based language model

    Institute of Scientific and Technical Information of China (English)

    CHEN Yong(陈勇); CHAN Kwok-ping

    2004-01-01

    In order to construct a good language model for use in the postprocessing phase of a recognition system, a smoothing technique must be used to solve the data sparseness problem. In the past, many smoothing techniques have been proposed, among which Katz's smoothing technique is well known. However, we found a weakness in Katz's smoothing technique. We improved this approach by incorporating one kind of special Chinese language information and Chinese word class information into the language model. We tested the new smoothing technique with a Chinese character recognition system. The experimental results showed that a better performance can be achieved.

  19. Probabilistic models of language processing and acquisition.

    Science.gov (United States)

    Chater, Nick; Manning, Christopher D

    2006-07-01

    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.

  20. Extending the Compensatory Model of Second Language Reading

    Science.gov (United States)

    McNeil, Levi

    2012-01-01

    Bernhardt (2005) proposed a compensatory model of second language reading. This model predicted that 50% of second language (L2) reading scores are attributed to second language knowledge and first-language (L1) reading ability. In this model, these two factors compensate for deficiencies in each other. Although this model explains a significant…

  1. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system it is important to create clear models and choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. The article presents a theoretical comparison of business rule and business process modeling languages. According to selected modeling aspects, the comparison is drawn between the different business process modeling languages and business rule representation languages. Finally, the best-fitting set of languages is selected for a three-layer framework for business rule based software modeling.

  2. Domain-Specific Modelling Languages in Bigraphs

    DEFF Research Database (Denmark)

    Perrone, Gian David

    ... of models, in order to improve the utility of the models we build, and to ease the process of model construction by moving the languages we use to express such models closer to their respective domains. This thesis is concerned with the study of bigraphical reactive systems as a host for domain-specific modelling languages. We present a number of novel technical developments, including a new complete meta-calculus presentation of bigraphical reactive systems, an abstract machine that instantiates to an abstract machine for any instance calculus, and a mechanism for defining declarative sorting predicates that always give rise to well-behaved sortings. We explore bigraphical refinement relations that permit formalisation of the relationship between different languages instantiated as bigraphical reactive systems. We detail a prototype verification tool for instance calculi, and provide a tractable heuristic...

  3. Incorporating POS Tagging into Language Modeling

    CERN Document Server

    Heeman, P A; Heeman, Peter A.; Allen, James F.

    1997-01-01

    Language models for speech recognition tend to concentrate solely on recognizing the words that were spoken. In this paper, we redefine the speech recognition problem so that its goal is to find both the best sequence of words and their syntactic role (part-of-speech) in the utterance. This is a necessary first step towards tightening the interaction between speech recognition and natural language understanding.

  4. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.

  5. A Bit Progress on Word-Based Language Model

    Institute of Scientific and Technical Information of China (English)

    陈勇; 陈国评

    2003-01-01

    A good language model is essential to a postprocessing algorithm for recognition systems. In the past, researchers have presented various language models, such as character-based language models, word-based language models, syntactical-rule language models, hybrid models, etc. The word N-gram model is by far an effective and efficient model, but one has to address the problem of data sparseness in establishing the model. Katz and Kneser et al. respectively presented effective remedies to solve this challenging problem. In this study, we proposed an improvement to their methods by incorporating Chinese language-specific information or Chinese word class information into the system.
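
    The record's central issue is smoothing: reserving probability mass for word sequences never seen in training. The sketch below uses simple add-k smoothing on a toy bigram model as a stand-in for the Katz and Kneser-Ney back-off schemes mentioned above; the corpus and the constant k are invented.

    ```python
    # A toy bigram language model with add-k smoothing (a simplified stand-in
    # for Katz / Kneser-Ney back-off); the corpus and k are illustrative only.
    from collections import Counter

    corpus = ["the cat sat", "the cat ran", "a dog sat"]
    sentences = [["<s>"] + s.split() + ["</s>"] for s in corpus]

    unigrams = Counter(w for sent in sentences for w in sent)
    bigrams = Counter(pair for sent in sentences for pair in zip(sent, sent[1:]))
    vocab_size = len(unigrams)
    k = 0.5                                # smoothing constant

    def bigram_prob(prev, word):
        """P(word | prev) with add-k smoothing, so unseen bigrams keep some mass."""
        return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * vocab_size)

    print(bigram_prob("the", "cat"))       # seen bigram: relatively high probability
    print(bigram_prob("the", "dog"))       # unseen bigram: small but non-zero
    ```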

  6. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  7. Verified Visualisation of Textual Modelling Languages

    DEFF Research Database (Denmark)

    Fairmichael, Fintan; Kiniry, Joseph Roland

    Many modelling languages have both a textual and a graphical form. The relationship between these two forms ought to be clear and concrete, but is instead commonly underspecified, weak, and informal. Further, processes and tool support for modelling often do not treat both forms as first-class...

  8. Verified Visualisation of Textual Modelling Languages

    DEFF Research Database (Denmark)

    Fairmichael, Fintan; Kiniry, Joseph Roland

    2011-01-01

    Many modelling languages have both a textual and a graphical form. The relationship between these two forms ought to be clear and concrete, but is instead commonly underspecified, weak, and informal. Further, processes and tool support for modelling often do not treat both forms as first-class...

  9. Modeling Transient States in Language Change

    NARCIS (Netherlands)

    Postma, G.J.; Truswell, Robert; Mattieu, Eric

    2015-01-01

    Models of language change may include, apart from an initial state and a terminal state, an intermediate transient state T. Building further on the Failed Change Model (Postma 2010), which ties the dynamics of the transient state T to the dynamics of the overall change A → B, we present a generalized...

  10. Changes in latent fingerprint examiners' markup between analysis and comparison.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%).

  11. Models of Integrating Content and Language Learning

    Directory of Open Access Journals (Sweden)

    Jiaying Howard

    2006-01-01

    Full Text Available Content-based instruction has become increasingly recognized as a means of developing both linguistic and content ability. Drawing on educational practices at the Monterey Institute of International Studies, this paper analyzes conditions that encourage the integration of language and content learning, presents various content-based instructional models, including those that have been developed at the Monterey Institute, and examines the decision-making process of selecting a content-based instructional model for a particular educational setting. Discussions center on making decisions that are most likely to accelerate the growth of foreign language proficiency and the acquisition of content knowledge.

  12. PML:PAGE-OM Markup Language: About PAGE-OM [

    Lifescience Database Archive (English)

    Full Text Available ...generating huge amounts of data, which typically must be shared amongst many collaborators and researchers. To store and use such data efficiently, it is paramount that biomedical researchers...

  13. The semantics of Chemical Markup Language (CML): dictionaries and conventions.

    Science.gov (United States)

    Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens

    2011-10-14

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.

  14. The semantics of Chemical Markup Language (CML): dictionaries and conventions

    Directory of Open Access Journals (Sweden)

    Murray-Rust Peter

    2011-10-01

    Full Text Available Abstract The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
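
    To make the dictionary-based validation described in the two records above concrete, the sketch below checks that every property in a toy document points to an entry in a dictionary addressed by URI. The dictRef values, dictionary contents and document layout are invented for illustration; they do not reproduce the actual CML conventions or schema.

    ```python
    # A toy illustration of dictionary-based validation: each property must
    # reference a known dictionary entry via URI. All names here are invented.
    import xml.etree.ElementTree as ET

    DICTIONARY = {  # term URI -> human-readable description
        "http://example.org/dict/meltingPoint": "Melting point of a substance",
        "http://example.org/dict/boilingPoint": "Boiling point of a substance",
    }

    doc = ET.fromstring("""
    <cml xmlns="http://www.xml-cml.org/schema">
      <property dictRef="http://example.org/dict/meltingPoint" value="273.15"/>
      <property dictRef="http://example.org/dict/unknownTerm" value="42"/>
    </cml>
    """)

    ns = {"c": "http://www.xml-cml.org/schema"}
    for prop in doc.findall("c:property", ns):
        ref = prop.get("dictRef")
        status = "ok" if ref in DICTIONARY else "NOT IN DICTIONARY"
        print(ref, status)
    ```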

  15. PML:PAGE-OM Markup Language: About IBIC [

    Lifescience Database Archive (English)

    Full Text Available ...bility Conference (IBIC) to have international discussions on the standardization of genomic variation data ... request for proposal for the standardization of genomic variation data description form in January 2004. Because the 1st international conference was fruitful, we decided such international conference to be executed conti...

  16. Interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-07-01

    Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.

  17. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
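
    As a purely illustrative sketch of the kind of representation described above (multilingual labels, versioning, hierarchy), the snippet below builds and reads a small XML fragment for one ICD-10 code. The element and attribute names are hypothetical and do not reproduce the CEN/TC 251 work or the authors' schema.

    ```python
    # Hypothetical XML shape for a multilingual, versioned ICD-10 entry;
    # element and attribute names are invented for illustration.
    import xml.etree.ElementTree as ET

    entry_xml = """
    <classItem code="J45" version="ICD-10-2005">
      <label lang="en">Asthma</label>
      <label lang="de">Asthma bronchiale</label>
      <subClass code="J45.0"/>
      <subClass code="J45.1"/>
    </classItem>
    """

    entry = ET.fromstring(entry_xml)
    print(entry.get("code"), entry.get("version"))
    for label in entry.findall("label"):
        print(label.get("lang"), label.text)
    for sub in entry.findall("subClass"):
        print("child:", sub.get("code"))
    ```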

  18. Estimation of service sector mark-ups determined by structural reform indicators

    OpenAIRE

    Anna Thum-Thysen; Erik Canton

    2015-01-01

    This paper analyses the impact of regulation on product sector mark-ups across the EU and confirms that less strict regulation tends to foster competition and reduce mark-up rates. The results also show that mark-ups in most EU countries and sectors have been declining over the last 15 years as a result of competition-friendly reforms. The paper also casts light on which areas of regulation are most important for mark-ups in individual sectors.

  19. Formal specification with the Java modeling language

    NARCIS (Netherlands)

    Huisman, Marieke; Ahrendt, Wolfgang; Grahl, Daniel; Hentschel, Martin; Ahrendt, Wolfgang; Beckert, Bernhard; Bubel, Richard; Hähnle, Reiner; Schmitt, Peter H.; Ulbrich, Mattoas

    2016-01-01

    This text is a general, self contained, and tool independent introduction into the Java Modeling Language, JML. It appears in a book about the KeY approach and tool, because JML is the dominating starting point of KeY style Java verification. However, this chapter does not depend on KeY, nor any

  20. Aligning Grammatical Theories and Language Processing Models

    Science.gov (United States)

    Lewis, Shevaun; Phillips, Colin

    2015-01-01

    We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…

  1. JSBML 1.0: providing a smorgasbord of options to encode systems biology models

    DEFF Research Database (Denmark)

    Rodriguez, Nicolas; Thomas, Alex; Watanabe, Leandro

    2015-01-01

    JSBML, the official pure Java programming library for the Systems Biology Markup Language (SBML) format, has evolved with the advent of different modeling formalisms in systems biology and their ability to be exchanged and represented via extensions of SBML. JSBML has matured into a major, active...

  2. Language Models for Handwritten Short Message Services

    CERN Document Server

    Prochasson, Emmanuel Ep; Morin, Emmanuel

    2009-01-01

    Handwriting is an alternative method for entering the texts composing Short Message Services. However, the texts produced in this way feature a whole new language, including, for instance, abbreviations and other consonantal writing which sprang up for time saving and fashion. We have collected and processed a significant number of such handwritten SMS, and used various strategies to tackle this challenging area of handwriting recognition. We proposed to study more specifically three different phenomena: consonant skeleton, rebus, and phonetic writing. For each of them, we compare the rough results produced by a standard recognition system with those obtained when using a specific language model.

  3. An adaptive contextual quantum language model

    Science.gov (United States)

    Li, Jingfei; Zhang, Peng; Song, Dawei; Hou, Yuexian

    2016-08-01

    User interactions in a search system represent a rich source of implicit knowledge about the user's cognitive state and information need that continuously evolves over time. Despite massive efforts that have been made to exploit and incorporate this implicit knowledge in information retrieval, it is still a challenge to effectively capture the term dependencies and the user's dynamic information need (reflected by query modifications) in the context of user interaction. To tackle these issues, motivated by the recent Quantum Language Model (QLM), we develop a QLM-based retrieval model for session search, which naturally incorporates the complex term dependencies occurring in the user's historical queries and clicked documents with density matrices. In order to capture the dynamic information within users' search sessions, we propose a density matrix transformation framework and further develop an adaptive QLM ranking model. Extensive comparative experiments show the effectiveness of our session quantum language models.
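
    For orientation, a quantum language model represents a piece of text (a query or a clicked document) as a density matrix, i.e. a convex mixture of rank-one projectors onto unit term or dependency vectors; one ranking function used in the QLM literature is the negative von Neumann divergence between the query and document matrices. The notation below is generic and does not reproduce the paper's adaptive transformation framework.

    ```latex
    % Generic density-matrix representation: v_i are unit vectors for observed
    % terms/dependencies, p_i their mixture weights.
    \rho = \sum_i p_i \, |v_i\rangle\langle v_i| ,
    \qquad p_i \ge 0, \quad \sum_i p_i = 1, \quad \mathrm{tr}(\rho) = 1

    % One ranking choice used in the QLM literature: negative von Neumann
    % divergence between the query matrix \rho_q and the document matrix \rho_d.
    \mathrm{score}(q, d) = -\,\mathrm{tr}\!\left( \rho_q (\log \rho_q - \log \rho_d) \right)
    ```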

  4. Modeling social learning of language and skills.

    Science.gov (United States)

    Vogt, Paul; Haasdijk, Evert

    2010-01-01

    We present a model of social learning of both language and skills, while assuming—insofar as possible—strict autonomy, virtual embodiment, and situatedness. This model is built by integrating various previous models of language development and social learning, and it is this integration that, under the mentioned assumptions, provides novel challenges. The aim of the article is to investigate what sociocognitive mechanisms agents should have in order to be able to transmit language from one generation to the next so that it can be used as a medium to transmit internalized rules that represent skill knowledge. We have performed experiments where this knowledge solves the familiar poisonous-food problem. Simulations reveal under what conditions, regarding population structure, agents can successfully solve this problem. In addition to issues relating to perspective taking and mutual exclusivity, we show that agents need to coordinate interactions so that they can establish joint attention in order to form a scaffold for language learning, which in turn forms a scaffold for the learning of rule-based skills. Based on these findings, we conclude by hypothesizing that social learning at one level forms a scaffold for the social learning at another, higher level, thus contributing to the accumulation of cultural knowledge.

  5. A Visual Meta-Language for Generic Modeling

    Science.gov (United States)

    2007-11-02

    since models provide a communication mechanism. Modeling languages can be textual or visual. Kim Marriott and Bernd Meyer describe visual languages as...

  6. On the interoperability of model-to-model transformation languages

    NARCIS (Netherlands)

    Jouault, Frédéric; Kurtev, Ivan

    2007-01-01

    Transforming models is a crucial activity in Model Driven Engineering (MDE). With the adoption of the OMG QVT standard for model transformation languages, it is anticipated that the experience in applying model transformations in various domains will increase. However, the QVT standard is just one p

  7. Statistical Language Model for Chinese Text Proofreading

    Institute of Scientific and Technical Information of China (English)

    张仰森; 曹元大

    2003-01-01

    Statistical language modeling techniques are investigated so as to construct a language model for Chinese text proofreading. After the defects of the n-gram model are analyzed, a novel statistical language model for Chinese text proofreading is proposed. This model takes full account of the information located before and after the target word wi, and of the relationship between non-neighboring words wi and wj in the linguistic environment (LE). First, the word association degree between wi and wj is defined using a distance-weighted factor, where wj is l words apart from wi in the LE; then the Bayes formula is used to calculate the LE-related degree of word wi; and lastly, the LE-related degree is taken as the criterion to predict the reasonability of word wi appearing in the context. Comparing the proposed model with the traditional n-gram in a Chinese text automatic error detection system, the experimental results show that the error detection recall rate and precision rate of the system have been improved.
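
    The Bayes step mentioned in the abstract can be written generically as below, with LE denoting the observed linguistic environment of the target word; the paper's distance-weighted association factors enter through the estimate of P(LE | wi) and are not reproduced here.

    ```latex
    % Generic Bayes step for the LE-related degree of a candidate word w_i.
    P(w_i \mid \mathrm{LE}) = \frac{P(\mathrm{LE} \mid w_i)\, P(w_i)}{P(\mathrm{LE})}
    ```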

  8. A simple branching model that reproduces language family and language population distributions

    Science.gov (United States)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data on real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data by additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, similarly to the linguistics tradition since Swadesh.

  9. Clone Detection for Graph-Based Model Transformation Languages

    DEFF Research Database (Denmark)

    Strüber, Daniel; Plöger, Jennifer; Acretoaie, Vlad

    2016-01-01

    has been proposed for programming and modeling languages; yet no specific ones have emerged for model transformation languages. In this paper, we explore clone detection for graph-based model transformation languages. We introduce potential use cases for such techniques in the context of constructive...

  10. CML: the commonKADS conceptual modelling language

    NARCIS (Netherlands)

    Schreiber, G.; Wielinga, B.J.; Akkermans, J.M.; Velde, van de W.; Anjewierden, A.A.

    1994-01-01

    We present a structured language for the specification of knowledge models according to the CommonKADS methodology. This language is called CML (Conceptual Modelling Language) and provides both a structured textual notation and a diagrammatic notation for expertise models. The use of our CML is illustrated...

  11. Some Initial Reflections on XML Markup for an Image-Based Electronic Edition of the Brooklyn Museum Aramaic Papyri

    Directory of Open Access Journals (Sweden)

    F.W. Dobbs-Allsopp

    2016-04-01

    Full Text Available A collaborative project of the Brooklyn Museum and a number of allied institutions, including Princeton Theological Seminary and West Semitic Research, the Digital Brooklyn Museum Aramaic Papyri (DBMAP) is to be both an image-based electronic facsimile edition of the important collection of Aramaic papyri from Elephantine housed at the Brooklyn Museum and an archival resource to support ongoing research on these papyri and the public dissemination of knowledge about them. In the process of building out a (partial) prototype of the edition, to serve as a proof of concept, we have discovered little field-specific discussion that might guide our markup decisions. Consequently, here our chief ambition is to initiate such a conversation. After a brief overview of DBMAP, we offer some initial reflection on and assessment of XML markup schemes specifically for Semitic texts from the ancient Near East that comply with TEI, CSE, and MEP guidelines. We take as our example BMAP 3 (=TAD B3.4) and we focus on markup as pertains to the editorial transcription of this documentary text and to the linguistic analysis of the text’s language.

  12. A model of the mechanisms of language extinction and revitalization strategies to save endangered languages.

    Science.gov (United States)

    Fernando, Chrisantha; Valijärvi, Riitta-Liisa; Goldstein, Richard A

    2010-02-01

    Why and how have languages died out? We have devised a mathematical model to help us understand how languages go extinct. We use the model to ask whether language extinction can be prevented in the future and why it may have occurred in the past. A growing number of mathematical models of language dynamics have been developed to study the conditions for language coexistence and death, yet their phenomenological approach compromises their ability to influence language revitalization policy. In contrast, here we model the mechanisms underlying language competition and look at how these mechanisms are influenced by specific language revitalization interventions, namely, private interventions to raise the status of the language and thus promote language learning at home, public interventions to increase the use of the minority language, and explicit teaching of the minority language in schools. Our model reveals that it is possible to preserve a minority language but that continued long-term interventions will likely be necessary. We identify the parameters that determine which interventions work best under certain linguistic and societal circumstances. In this way the efficacy of interventions of various types can be identified and predicted. Although there are qualitative arguments for these parameter values (e.g., the responsiveness of children to learning a language as a function of the proportion of conversations heard in that language, the relative importance of conversations heard in the family and elsewhere, and the amplification of spoken to heard conversations of the high-status language because of the media), extensive quantitative data are lacking in this field. We propose a way to measure these parameters, allowing our model, as well as others models in the field, to be validated.

  13. Models of Integrating Content and Language Learning

    OpenAIRE

    Jiaying Howard

    2006-01-01

    Content-based instruction has become increasingly recognized as a means of developing both linguistic and content ability. Drawing on educational practices at the Monterey Institute of International Studies, this paper analyzes conditions that encourage the integration of language and content learning, presents various content-based instructional models, including those that have been developed at the Monterey Institute, and examines the decision-making process of selecting...

  14. THE NATIVIST MODEL LANGUAGE ACQUISITION DEVICE (A THEORY OF LANGUAGE ACQUISITION)

    Directory of Open Access Journals (Sweden)

    Mamluatul Hasanah

    2011-10-01

    Full Text Available Every child possesses the ability to use a mother tongue: children can master the language without receiving specific instruction, and in a short time a child masters the language well enough to communicate with others. There are many theories of language acquisition. One that still persists is the nativist model of the Language Acquisition Device (LAD), pioneered by Noam Chomsky. In this model the child acquires language naturally; the capacity that develops automatically as the language is used is the Language Acquisition Device (LAD). The LAD constitutes a hypothesis about the features of grammatical rules used progressively by a child in accordance with his or her psychological development.

  15. Language Preference and Time Allocation in the Joint Languages Diversification Model.

    Science.gov (United States)

    Kenning, Marie-Madeleine

    1994-01-01

    Presents a follow-up study of a joint languages diversification model. The research focuses on the evolution of the relative popularity of the three languages involved in the scheme (French, German, and Italian) and the impact of a timetable that allocates different amounts of time to two languages with a switch halfway through the year. (five…

  16. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  17. Towards a Semantics for XML Markup

    Institute of Scientific and Technical Information of China (English)

    Allen Renear; David Dubin; C. M. Sperberg-McQueen; Claus Huitfeldt; Wang Xiaoguang (trans.); Wang Junfang (trans.)

    2016-01-01

    Although XML Document Type Definitions provide a mechanism for specifying, in machine-readable form, the syntax of an XML markup language, there is no comparable mechanism for specifying the semantics of an XML vocabulary. That is, there is no way to characterize the meaning of XML markup so that the facts and relationships represented by the occurrence of XML constructs can be explicitly, comprehensively, and mechanically identified. This has serious practical and theoretical consequences. On the positive side, XML constructs can be assigned arbitrary semantics and used in application areas not foreseen by the original designers. On the less positive side, both content developers and application engineers must rely upon prose documentation, or, worse, conjectures about the intention of the markup language designer — a process that is time-consuming, error-prone, incomplete, and unverifiable, even when the language designer properly documents the language. In addition, the lack of a substantial body of research in markup semantics means that digital document processing is undertheorized as an engineering application area. Although there are some related projects underway (XML Schema, RDF, the Semantic Web) which provide relevant results, none of these projects directly and comprehensively address the core problems of XML markup semantics. This paper (i) summarizes the history of the concept of markup meaning, (ii) characterizes the specific problems that motivate the need for a formal semantics for XML, and (iii) describes an ongoing research project, the BECHAMEL Markup Semantics Project, that is attempting to develop such a semantics.

  18. Self-Organizing Map Models of Language Acquisition

    Directory of Open Access Journals (Sweden)

    Ping eLi

    2013-11-01

    Full Text Available Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic PDP architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development.

  19. Using descriptive mark-up to formalize translation quality assessment

    CERN Document Server

    Kutuzov, Andrey

    2008-01-01

    The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.

  20. A model for BPEL-like languages

    Institute of Scientific and Technical Information of China (English)

    HE Jifeng; ZHU Huibiao; PU Geguang

    2007-01-01

    Web services are increasingly being applied in solving many universal interoperability problems. The Business Process Execution Language (BPEL) is a de facto standard for specifying the behavior of business processes. It contains several interesting features, including scope-based compensation, fault handling and shared labels for synchronization. In this paper we explore an observation-oriented model for BPEL-like languages, which can be used to study program equivalence. The execution states of a program are divided into five types: completed state, waiting state and divergent state, as well as error state and undo state. The last two states are especially for dealing with compensation and fault handling. Based on the formalized model, a set of algebraic laws is investigated, including traditional laws and BPEL-featured laws. The concept of guarded choice is also introduced in this model, which can be used to support the transformation of a parallel program into the form of guarded choice. Two special scopes are introduced: the canonical structure and the compensation structure, which are used to eliminate undo and compensation constructs from finite processes.

  1. Computational modelling of evolution: ecosystems and language

    CERN Document Server

    Lipowski, Adam

    2008-01-01

    Recently, computational modelling has become a very important research tool that enables us to study problems that for decades evaded scientific analysis. Evolutionary systems are certainly examples of such problems: they are composed of many units that might reproduce, diffuse, mutate, die, or, in some cases, communicate. These processes might be of some adaptive value; they influence each other and occur on various time scales. That is why such systems are so difficult to study. In this paper we briefly review some computational approaches, as well as our contributions, to the evolution of ecosystems and language. We start from the Lotka-Volterra equations and the modelling of simple two-species prey-predator systems. Such systems are a canonical example for studying oscillatory behaviour in competitive populations. Then we describe various approaches to studying the long-term evolution of multi-species ecosystems. We emphasize the need to use models that take into account both ecological and evolutionary processes...
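
    For reference, the two-species Lotka-Volterra prey-predator system mentioned above is usually written as follows, with x the prey density, y the predator density, and alpha, beta, gamma, delta positive rate constants.

    ```latex
    % Standard two-species Lotka-Volterra system: x = prey, y = predator;
    % alpha, beta, gamma, delta are positive rate constants.
    \frac{dx}{dt} = \alpha x - \beta x y ,
    \qquad
    \frac{dy}{dt} = \delta x y - \gamma y
    ```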

  2. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto;

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation...

  3. VMTL: a language for end-user model transformation

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2016-01-01

    Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce Visual Model Transformation Language (VMTL) following...

  4. A FORMAL SPECIFICATION LANGUAGE FOR DYNAMIC STRAND SPACE MODEL

    Institute of Scientific and Technical Information of China (English)

    刘东喜; 李晓勇; 白英彩

    2002-01-01

    A specification language is used to provide enough information for the model of a cryptographic protocol. This paper first extends the strand space model to a dynamic strand model, and then a formal specification language for this model is defined using BNF grammar. Compared with those in the literature, it is simpler because it concerns only the algebraic properties of cryptographic protocols.

  5. Are the determinants of markup size industry-specific? The case of Slovenian manufacturing firms

    Directory of Open Access Journals (Sweden)

    Ponikvar Nina

    2011-01-01

    Full Text Available The aim of this paper is to identify factors that affect the pricing policy in Slovenian manufacturing firms in terms of the markup size and, most of all, to explicitly account for the possibility of differences in pricing procedures among manufacturing industries. Accordingly, the analysis of the dynamic panel is carried out on an industry-by-industry basis, allowing the coefficients on the markup determinants to vary across industries. We find that the oligopoly theory of markup determination for the most part holds for the manufacturing sector as a whole, although large variability in markup determinants exists across industries within the Slovenian manufacturing. Our main conclusion is that each industry should be investigated separately in detail in order to assess the precise role of markup factors in the markup-determination process.

  6. On the Computational Expressiveness of Model Transformation Languages

    DEFF Research Database (Denmark)

    Al-Sibahi, Ahmad Salim

    2015-01-01

    Common folklore in the model transformation community dictates that most transformation languages are Turing-complete. It is however seldom that a proof or an explanation is provided on why such a property holds; due to the widely different features and execution models in these languages, it is not immediately obvious what their computational expressiveness is. In this paper we present an analysis that clarifies the computational expressiveness of a large number of model transformation languages. The analysis confirms the folklore for all model transformation languages, except the bidirectional ones...

  7. Querying and Serving N-gram Language Models with Python

    Directory of Open Access Journals (Sweden)

    2009-06-01

    Full Text Available Statistical n-gram language modeling is a very important technique in Natural Language Processing (NLP) and Computational Linguistics used to assess the fluency of an utterance in any given language. It is widely employed in several important NLP applications such as Machine Translation and Automatic Speech Recognition. However, the most commonly used toolkit (SRILM) to build such language models on a large scale is written entirely in C++ which presents a challenge to an NLP developer or researcher whose primary language of choice is Python. This article first provides a gentle introduction to statistical language modeling. It then describes how to build a native and efficient Python interface (using SWIG) to the SRILM toolkit such that language models can be queried and used directly in Python code. Finally, it also demonstrates an effective use case of this interface by showing how to leverage it to build a Python language model server. Such a server can prove to be extremely useful when the language model needs to be queried by multiple clients over a network: the language model must only be loaded into memory once by the server and can then satisfy multiple requests. This article includes only those listings of source code that are most salient. To conserve space, some are only presented in excerpted form. The complete set of full source code listings may be found in Volume 1 of The Python Papers Source Codes Journal.
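
    To make the client/server idea concrete without reproducing the article's SWIG-generated SRILM bindings, the sketch below serves a toy add-one-smoothed unigram model over HTTP using only the Python standard library. A client could then request, for example, http://localhost:8080/?q=the+cat+sat and receive the sentence log-probability as JSON.

    ```python
    # A toy language-model server: loads a (tiny) model once and answers
    # log-probability queries over HTTP. It is a simplified stand-in for the
    # SRILM-backed server described in the article, not its actual code.
    import json
    import math
    from collections import Counter
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    corpus = "the cat sat on the mat the dog sat".split()
    counts = Counter(corpus)
    total = sum(counts.values())

    def logprob(sentence):
        """Add-one smoothed unigram log-probability of a whitespace-tokenised sentence."""
        v = len(counts)
        return sum(math.log((counts[w] + 1) / (total + v)) for w in sentence.split())

    class LMHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            sentence = query.get("q", [""])[0]
            body = json.dumps({"sentence": sentence, "logprob": logprob(sentence)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LMHandler).serve_forever()
    ```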

  8. A Tool for Model-Based Language Specification

    CERN Document Server

    Quesada, Luis; Cubero, Juan-Carlos

    2011-01-01

    Formal languages let us define the textual representation of data with precision. Formal grammars, typically in the form of BNF-like productions, describe the language syntax, which is then annotated for syntax-directed translation and completed with semantic actions. When, apart from the textual representation of data, an explicit representation of the corresponding data structure is required, the language designer has to devise the mapping between the suitable data model and its proper language specification, and then develop the conversion procedure from the parse tree to the data model instance. Unfortunately, whenever the format of the textual representation has to be modified, changes have to be propagated throughout the entire language processor tool chain. These updates are time-consuming, tedious, and error-prone. Besides, in case different applications use the same language, several copies of the same language specification have to be maintained. In this paper, we introduce a model-based parser generat...

  9. Towards a Model of Language Attrition: Neurobiological and Psychological Contributions

    OpenAIRE

    Yoshitomi, Asako

    1992-01-01

    Research in L2 attrition is a relatively new enterprise which is in need of a comprehensive theory/model. This paper presents a tentative cognitive-psychological model of language attrition, which draws on information from studies in L2 attrition, neurobiology, and psychology. This is to demonstrate that a model based on consideration of the brain has the potential of providing a plausible account of the process of language attrition, as well as the process of language acquisition.

  10. The field representation language.

    Science.gov (United States)

    Tsafnat, Guy

    2008-02-01

    The complexity of quantitative biomedical models, and the rate at which they are published, is increasing to a point where managing the information has become all but impossible without automation. International efforts are underway to standardise representation languages for a number of mathematical entities that represent a wide variety of physiological systems. This paper presents the Field Representation Language (FRL), a portable representation of values that change over space and/or time. FRL is an extensible mark-up language (XML) derivative with support for large numeric data sets in Hierarchical Data Format version 5 (HDF5). Components of FRL can be reused through uniform resource identifiers (URIs) that point to external resources such as custom basis functions, boundary geometries and numerical data. To demonstrate the use of FRL as an interchange format we present three models that study hyperthermia cancer treatment: a fractal model of liver tumour microvasculature; a probabilistic model simulating the deposition of magnetic microspheres throughout it; and a finite element model of hyperthermic treatment. The microsphere distribution field was used to compute the heat generation rate field around the tumour. We used FRL to convey results from the microsphere simulation to the treatment model. FRL facilitated the conversion of the coordinate systems and approximated the integral over regions of the microsphere deposition field.
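
FRL's actual element names and schema are not reproduced in this record, so the fragment below is only a hypothetical illustration of the ideas the abstract mentions: an XML field description that points to an external HDF5 resource and a basis function by URI, built with Python's standard library.

```python
# Hypothetical FRL-like fragment: element and attribute names are invented
# for illustration; only the concepts (field metadata plus URI references to
# external numeric data and basis functions) follow the abstract.
import xml.etree.ElementTree as ET

field = ET.Element("field", name="heatGenerationRate", units="W.m-3")
ET.SubElement(field, "domain", geometry="urn:example:liver-tumour-mesh")
ET.SubElement(field, "basis", href="http://example.org/basis/trilinear.xml")
ET.SubElement(field, "values",
              href="file:microspheres.h5#/deposition/density",  # HDF5 dataset by URI
              format="HDF5")

print(ET.tostring(field, encoding="unicode"))
```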

  11. The DSD Schema Language

    DEFF Research Database (Denmark)

    Klarlund, Nils; Møller, Anders; Schwartzbach, Michael Ignatieff

    2002-01-01

    XML (Extensible Markup Language), a linear syntax for trees, has gathered a remarkable amount of interest in industry. The acceptance of XML opens new venues for the application of formal methods such as specification of abstract syntax tree sets and tree transformations. A user domain may...

  12. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
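
As a rough illustration of how SEQUAL-style quality criteria and an MCDA step can be combined, the sketch below ranks candidate languages by a simple weighted sum; the criteria, weights and scores are invented, and the MCDA method actually used with SEQUAL may be considerably more elaborate.

```python
# Toy weighted-sum ranking of modelling languages against quality criteria.
# Criteria, weights and scores are illustrative only; they do not come from
# the paper, which combines SEQUAL with a (richer) MCDA approach.
criteria_weights = {"comprehensibility": 0.4, "expressiveness": 0.35, "tool_support": 0.25}

candidates = {
    "BPMN":       {"comprehensibility": 8, "expressiveness": 7, "tool_support": 9},
    "EPC":        {"comprehensibility": 7, "expressiveness": 6, "tool_support": 6},
    "Petri nets": {"comprehensibility": 5, "expressiveness": 9, "tool_support": 7},
}

def score(lang_scores):
    return sum(criteria_weights[c] * lang_scores[c] for c in criteria_weights)

ranking = sorted(candidates, key=lambda lang: score(candidates[lang]), reverse=True)
for lang in ranking:
    print(f"{lang}: {score(candidates[lang]):.2f}")
```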

  13. Towards a Model of Language Attrition: Neurobiological and Psychological Contributions.

    Science.gov (United States)

    Yoshitomi, Asako

    1992-01-01

    Presents a tentative cognitive-psychological model of language attrition, which draws on information from studies in second language attrition, neurobiology and psychology. Notes that this model is presented to demonstrate that a model based on consideration of the brain has the potential of providing a plausible account of the process of language…

  14. Toward Integration: An Instructional Model of Science and Academic Language

    Science.gov (United States)

    Silva, Cecilia; Weinburgh, Molly; Malloy, Robert; Smith, Kathy Horak; Marshall, Jenesta Nettles

    2012-01-01

    In this article, the authors outline an instructional model that can be used to optimize science and language learning in the classroom. The authors have developed the 5R instructional model (Weinburgh & Silva, 2010) to support teachers as they integrate academic language into content instruction. The model combines five strategies already…

  15. Modeling socioeconomic status effects on language development.

    Science.gov (United States)

    Thomas, Michael S C; Forrester, Neil A; Ronald, Angelica

    2013-12-01

    Socioeconomic status (SES) is an important environmental predictor of language and cognitive development, but the causal pathways by which it operates are unclear. We used a computational model of development to explore the adequacy of manipulations of environmental information to simulate SES effects in English past-tense acquisition, in a data set provided by Bishop (2005). To our knowledge, this is the first application of computational models of development to SES. The simulations addressed 3 new challenges: (a) to combine models of development and individual differences in a single framework, (b) to expand modeling to the population level, and (c) to implement both environmental and genetic/intrinsic sources of individual differences. The model succeeded in capturing the qualitative patterns of regularity effects in both population performance and the predictive power of SES that were observed in the empirical data. The model suggested that the empirical data are best captured by relatively wider variation in learning abilities and relatively narrow variation in (and good quality of) environmental information. There were shortcomings in the model's quantitative fit, which are discussed. The model made several novel predictions, with respect to the influence of SES on delay versus giftedness, the change of SES effects over development, and the influence of SES on children of different ability levels (gene-environment interactions). The first of these predictions was that SES should reliably predict gifted performance in children but not delayed performance, and the prediction was supported by the Bishop data set. Finally, the model demonstrated limits on the inferences that can be drawn about developmental mechanisms on the basis of data from individual differences. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Integrating content and language in English language teaching in secondary education: Models, benefits, and challenges

    OpenAIRE

    Banegas, Darío Luis

    2012-01-01

    In the last decade, there has been a major interest in content-based instruction (CBI) and content and language integrated learning (CLIL). These are similar approaches which integrate content and foreign/second language learning through various methodologies and models as a result of different implementations around the world. In this paper, I first offer a sociocultural view of CBI-CLIL. Secondly, I define language and content as vital components in CBI-CLIL. Thirdly, I revie...

  17. A novel storage method for near infrared spectroscopy chemometric models.

    Science.gov (United States)

    Zhang, Zhi-Min; Chen, Shan; Liang, Yi-Zeng

    2010-06-04

    The Chemometric Modeling Markup Language (CMML) was developed by us to store chemometric models within a single document, converting binary data into strings with base64 encode/decode algorithms in order to solve the interoperability issue in sharing chemometric models. It provides basic functionality for the storage of sampling, variable selection, pretreatment, outlier and modeling parameters and data. With the help of the base64 algorithm, CMML balances usability against file size by transforming binary data into base64-encoded strings. Owing to the advantages of the Extensible Markup Language (XML), models stored in CMML can easily be reused in other software and programming languages, as long as the programming language has an XML parsing library. One can also use the XML Path Language (XPath) query language to select desired data from a CMML file effectively. The application of this language to near infrared spectroscopy model storage is implemented as a C++ class and is available as open-source software (http://code.google.com/p/cmml); implementations in other languages, such as MATLAB and R, are in progress.
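
The storage idea, binary model data carried as base64 strings inside XML and pulled back out with a path query, can be sketched with Python's standard library. The element names below are invented, not CMML's actual schema.

```python
# Sketch of the base64-in-XML storage idea behind CMML. Element names are
# hypothetical; only the encode/store/query/decode round trip is illustrated.
import base64, struct
import xml.etree.ElementTree as ET

coefficients = [0.12, -3.4, 0.007]                         # e.g. regression coefficients
raw = struct.pack(f"{len(coefficients)}d", *coefficients)  # binary blob
encoded = base64.b64encode(raw).decode("ascii")            # text-safe string

model = ET.Element("model", type="PLS")
ET.SubElement(model, "coefficients", dtype="float64").text = encoded
doc = ET.ElementTree(model)

# Later: locate the element with an XPath-style query and decode the blob.
node = doc.find("./coefficients")
restored = struct.unpack(f"{len(coefficients)}d", base64.b64decode(node.text))
print(restored)   # (0.12, -3.4, 0.007)
```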

  18. A Grammar Analysis Model for the Unified Multimedia Query Language

    Institute of Scientific and Technical Information of China (English)

    Zhong-Sheng Cao; Zong-Da Wu; Yuan-Zhen Wang

    2008-01-01

    The unified multimedia query language (UMQL) is a powerful general-purpose multimedia query language, and it is very suitable for multimedia information retrieval. The paper proposes a grammar analysis model to implement an effective grammatical processing for the language. It separates the grammar analysis of a UMQL query specification into two phases: syntactic analysis and semantic analysis, and then respectively uses extended Backus-Naur form (EBNF) and logical algebra to specify both restrictive grammar rules. As a result, the model can present error-guiding information for a query specification whose grammar is incorrect. The model not only suits well the processing of UMQL queries, but also has a guiding significance for other projects concerning the query processing of descriptive query languages.
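
The record does not give UMQL's actual grammar, so the sketch below uses an invented toy query form purely to illustrate the two-phase idea: a syntactic check against an EBNF-like pattern, followed by a semantic check against a catalogue, each producing its own guiding error message.

```python
# Two-phase grammar analysis on an invented toy query form (not real UMQL):
#   phase 1: syntactic analysis against an EBNF-like pattern
#   phase 2: semantic analysis against a media-attribute catalogue
import re

SYNTAX = re.compile(r"^find\s+(\w+)\s+where\s+(\w+)\s*=\s*'([^']*)'$")
CATALOGUE = {"image": {"color", "width"}, "audio": {"duration"}}

def analyse(query):
    m = SYNTAX.match(query)
    if not m:                                   # phase 1 failure: syntax error
        return "syntax error: expected  find <media> where <attr> = '<value>'"
    media, attr, _ = m.groups()
    if media not in CATALOGUE:                  # phase 2 failures: semantic errors
        return f"semantic error: unknown media type '{media}'"
    if attr not in CATALOGUE[media]:
        return f"semantic error: '{media}' has no attribute '{attr}'"
    return "ok"

print(analyse("find image where color = 'red'"))   # ok
print(analyse("find video where color = 'red'"))   # semantic error
print(analyse("find image color 'red'"))            # syntax error
```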

  19. Translation rescoring through recurrent neural network language models

    OpenAIRE

    PERIS ABRIL, ÁLVARO

    2014-01-01

    This work is framed within the field of Statistical Machine Translation, more specifically the language modeling challenge. In this area the n-gram approach has classically predominated, but in recent years different approaches have arisen to tackle this problem. One of these approaches is the use of artificial recurrent neural networks, which are expected to outperform n-gram language models. The aim of this work is to test empirically these new language...

  20. Modeling of Slovak Language for Broadcast News Transcription

    OpenAIRE

    Staš, Ján; JUHÁR Jozef

    2015-01-01

    The paper describes recent progress in the development of the Slovak language models for transcription of spontaneous speech such as broadcast news, educational talks and lectures, or meetings. This work extends previous research oriented toward the automatic transcription of dictated speech and brings some new extensions for improving perplexity and robustness of the Slovak language models trained on the web-based and electronic language resources for being more precise in recognition of spontaneou...

  1. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is a powerful means of learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
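
In schematic form (this is the standard shape of such an objective with generic notation, not a formula quoted from the paper), a zero-mean Gaussian prior on the parameters turns the training criterion into a cross-entropy term plus a quadratic penalty:

```latex
% Regularized cross-entropy objective with a zero-mean Gaussian prior
% (schematic; notation is generic rather than the paper's own).
\mathcal{L}(\theta) \;=\; -\sum_{t} \log P_{\theta}\!\left(w_t \mid w_{1:t-1}\right)
\;+\; \frac{\lambda}{2}\,\lVert \theta \rVert^{2},
\qquad \lambda \text{ set by maximizing the marginal likelihood.}
```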

  2. Innovations in Language Learning: The Oregon Chinese Flagship Model

    Directory of Open Access Journals (Sweden)

    Carl Falsgraf

    2007-01-01

    Full Text Available Language learning in the United States suffers from a culture of low expectations. Lacking bilingual role models around them, students often view language class as, at best, a way to become a tourist in a country with a language different from their own. Monolingual policymakers assume that learning another language fluently is impossible and inconsequential, since they themselves are capable professionals with one language. Educators, discouraged by years of inadequate funding and support, have come to hope for nothing more than incremental improvements. The National Flagship Language Program (NFLP) aims to break this cycle of low expectations and low results by providing funding to institutions willing to accept the challenge of producing Superior (Level 3) language users through a radical re-engineering of the language learning enterprise. The need for fundamental change in language education is longstanding, but the events of September 11 brought the importance of this need to the awareness of national policymakers. Due to the emphasis on critical languages, responsibility for carrying out this fundamental re-examination of language learning has fallen to those engaged in the less commonly taught languages.

  3. LTSmin: high-performance language-independent model checking

    NARCIS (Netherlands)

    Kant, Gijs; Laarman, Alfons; Meijer, Jeroen; Pol, van de Jaco; Blom, Stefan; Dijk, van Tom; Baier, Christel; Tinelli, Cesare

    2015-01-01

    In recent years, the LTSmin model checker has been extended with support for several new modelling languages, including probabilistic (Mapa) and timed systems (Uppaal). Also, connecting additional language front-ends or ad-hoc state-space generators to LTSmin was simplified using custom C-code. From

  4. Null Objects in Second Language Acquisition: Grammatical vs. Performance Models

    Science.gov (United States)

    Zyzik, Eve C.

    2008-01-01

    Null direct objects provide a favourable testing ground for grammatical and performance models of argument omission. This article examines both types of models in order to determine which gives a more plausible account of the second language data. The data were collected from second language (L2) learners of Spanish by means of four oral…

  5. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
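
A minimal numerical sketch of a Lotka-Volterra-style competition between two languages is given below; the parameter values are arbitrary placeholders, not the impacts and inheritance rates the authors estimate from census and survey data.

```python
# Forward-Euler simulation of a generic Lotka-Volterra competition model for
# two language populations x and y. Parameters are placeholders, not the
# empirically estimated impacts/inheritance rates discussed in the abstract.
def simulate(x0=0.6, y0=0.4, r1=0.03, r2=0.025, a12=0.9, a21=1.1,
             k1=1.0, k2=1.0, dt=1.0, steps=200):
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        dx = r1 * x * (1 - (x + a12 * y) / k1)   # speakers of language 1
        dy = r2 * y * (1 - (y + a21 * x) / k2)   # speakers of language 2
        x, y = x + dt * dx, y + dt * dy
        history.append((x, y))
    return history

trajectory = simulate()
print(trajectory[-1])   # long-run shares under these placeholder parameters
```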

  6. Syllable language models for Mandarin speech recognition: exploiting character language models.

    Science.gov (United States)

    Liu, Xunying; Hieronymus, James L; Gales, Mark J F; Woodland, Philip C

    2013-01-01

    Mandarin Chinese is based on characters which are syllabic in nature and morphological in meaning. All spoken languages have syllabiotactic rules which govern the construction of syllables and their allowed sequences. These constraints are not as restrictive as those learned from word sequences, but they can provide additional useful linguistic information. Hence, it is possible to improve speech recognition performance by appropriately combining these two types of constraints. For the Chinese language considered in this paper, character level language models (LMs) can be used as a first level approximation to allowed syllable sequences. To test this idea, word and character level n-gram LMs were trained on 2.8 billion words (equivalent to 4.3 billion characters) of texts from a wide collection of text sources. Both hypothesis and model based combination techniques were investigated to combine word and character level LMs. Significant character error rate reductions up to 7.3% relative were obtained on a state-of-the-art Mandarin Chinese broadcast audio recognition task using an adapted history dependent multi-level LM that performs a log-linear combination of character and word level LMs. This supports the hypothesis that character or syllable sequence models are useful for improving Mandarin speech recognition performance.
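
The log-linear combination of word- and character-level LM scores can be sketched as below; the two scorers are simplistic stubs and the interpolation weight is a placeholder, not the tuned multi-level LM from the paper.

```python
# Log-linear combination of word-level and character-level LM scores for a
# hypothesis, as a stand-in for the multi-level LM described above.
KNOWN_WORDS = {"今天", "天气", "很好"}            # toy word-level vocabulary

def word_lm_logprob(words):
    # Stub word LM: known words are cheap, out-of-vocabulary words are penalized.
    return sum(-1.0 if w in KNOWN_WORDS else -8.0 for w in words)

def char_lm_logprob(chars):
    # Stub character LM: a flat per-character cost stands in for a character n-gram.
    return -0.8 * len(chars)

def combined_score(sentence, weight=0.7):
    words = sentence.split()
    chars = sentence.replace(" ", "")
    # Log-linear interpolation of the two levels (weight is a placeholder).
    return weight * word_lm_logprob(words) + (1 - weight) * char_lm_logprob(chars)

hypotheses = ["今天 天气 很好", "今天 天起 很好"]   # second contains a typo word
best = max(hypotheses, key=combined_score)
print(best)                                        # picks the in-vocabulary hypothesis
```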

  7. Self-organizing map models of language acquisition

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061

  8. Self-organizing map models of language acquisition.

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-11-19

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories.
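
The core learning step of a self-organizing map, finding the best-matching unit and then pulling it and its neighbours toward the input, is sketched below in plain Python; this is the generic SOM update rule, not the specific lexical architectures reviewed in the paper.

```python
# Generic self-organizing map update step (best-matching unit + neighbourhood
# pull), illustrating the learning rule behind SOM models of the lexicon.
import math, random

GRID = 8                      # 8 x 8 map
DIM = 5                       # dimensionality of input "word" vectors
weights = [[[random.random() for _ in range(DIM)] for _ in range(GRID)] for _ in range(GRID)]

def bmu(x):
    """Return grid coordinates of the best-matching unit for input x."""
    return min(((i, j) for i in range(GRID) for j in range(GRID)),
               key=lambda ij: sum((weights[ij[0]][ij[1]][k] - x[k]) ** 2 for k in range(DIM)))

def update(x, lr=0.1, sigma=1.5):
    bi, bj = bmu(x)
    for i in range(GRID):
        for j in range(GRID):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))      # neighbourhood function
            for k in range(DIM):
                weights[i][j][k] += lr * h * (x[k] - weights[i][j][k])

update([random.random() for _ in range(DIM)])   # one training step on one input
```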

  9. An XML-Based Mission Command Language for Autonomous Underwater Vehicles (AUVs)

    Science.gov (United States)

    2003-06-01

    for creating other languages) that is used to create markup languages, such as Hypertext Markup Language (HTML).” (From Deitel, Deitel, Nieto, Lin...text, or any other text-based document. (Deitel, Deitel, Nieto, Lin, and Sadhu, 2001) One benefit of using XSLT for such a task is that XSLT has none...allows the user to select Execution Program: there are two different programs for the execution level (one in C and the other in Java), so the users

  10. The Role of Learning Strategies in Second Language Acquisition: A Model for Research in Listening Comprehension

    Science.gov (United States)

    1987-06-01

    role of learning strategies in second language acquisition. While strategies used in acquiring productive language skills are discussed briefly, the...comprehensions. Keywords: Learning strategies, English as a second language, Second language acquisition, Basic skills, Research model.

  11. Declarative XML Update Language Based on a Higher Data Model

    Institute of Scientific and Technical Information of China (English)

    Guo-Ren Wang; Xiao-Lin Zhang

    2005-01-01

    With the extensive use of XML in applications over the Web, how to update XML data is becoming an important issue because the role of XML has expanded beyond traditional applications in which XML is used for information exchange and data representation over the Web. So far, several languages have been proposed for updating XML data, but they are all based on lower, so-called graph-based or tree-based data models. Update requests are thus expressed in a nonintuitive and unnatural way and update statements are too complicated to comprehend. This paper presents a novel declarative XML update language which is an extension of the XML-RL query language. Compared with other existing XML update languages, it has the following features. First, it is the only XML data manipulation language based on a higher data model. Second, this language can express complex update requests at multiple levels in a hierarchy in a simple and flat way. Third, this language directly supports the functionality of updating complex objects while all other update languages do not support these operations. Lastly, most existing languages use rename to modify attribute and element names, which differs from the way values are updated. The proposed language modifies tag names, values, and objects in a unified way by the introduction of three kinds of logical binding variables: object variables, value variables, and name variables.

  12. Pitch modelling for the Nguni languages

    CSIR Research Space (South Africa)

    Govender, N

    2007-06-01

    Full Text Available linguistic and physical variables of a prosodic nature in this family of languages. Firstly we undertake a set of experiments to select an appropriate pitch tracking algorithm for the Nguni family of languages. We then use this pitch tracking algorithm...

  13. Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).

    Science.gov (United States)

    Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco

    2013-10-01

    In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes it inappropriate to attempt rigid standardisation, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at local level by a specialized rehabilitation centre. The model describes the organisation of the rehabilitation delivery and it facilitates the monitoring of recovery during the process. Indeed, a software system was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating after process evolution. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Mathematical model of various statements of C-type Language

    Directory of Open Access Journals (Sweden)

    Manoj Kumar Srivastav

    2013-12-01

    Full Text Available Some of the important components of high-level languages are statements, keywords, variable declarations, arrays, user-defined functions, etc. In the case of object-oriented programming languages we use classes, objects, inheritance, operator overloading, function overloading, polymorphism, etc. There are some common categories of statements, such as control statements and loop statements. Pointers are also an important concept in the C language. User-defined functions, function subprograms or subroutines are also important concepts in different programming languages. Languages like ALGOL were developed using Chomsky's context-free grammar, and a similar concept is used in C-type languages. High-level languages are now based on mathematical derivations and logic, and most components of any high-level language can be obtained from simple mathematical logic and derivations. In the present study the authors have tried to give a unified mathematical model of a few statements, arrays and user-defined functions of the C language. However, the present method may be further extended to any other high-level language.

  15. Neutral evolution: A null model for language dynamics

    CERN Document Server

    Blythe, R A

    2011-01-01

    We review the task of aligning simple models for language dynamics with relevant empirical data, motivated by the fact that this is rarely attempted in practice despite an abundance of abstract models. We propose that one way to meet this challenge is through the careful construction of null models. We argue in particular that rejection of a null model must have important consequences for theories about language dynamics if modelling is truly to be worthwhile. Our main claim is that the stochastic process of neutral evolution (also known as genetic drift or random copying) is a viable null model for language dynamics. We survey empirical evidence in favour and against neutral evolution as a mechanism behind historical language changes, highlighting the theoretical implications in each case.

  16. Incorporating Linguistic Structure into Maximum Entropy Language Models

    Institute of Scientific and Technical Information of China (English)

    FANG GaoLin(方高林); GAO Wen(高文); WANG ZhaoQi(王兆其)

    2003-01-01

    In statistical language models, how to integrate diverse linguistic knowledge in a general framework for long-distance dependencies is a challenging issue. In this paper, an improved language model incorporating linguistic structure into a maximum entropy framework is presented. The proposed model combines trigram with the structure knowledge of base phrase, in which trigram is used to capture the local relation between words, while the structure knowledge of base phrase is considered to represent the long-distance relations between syntactical structures. The knowledge of syntax, semantics and vocabulary is integrated into the maximum entropy framework. Experimental results show that the proposed model improves language model perplexity by 24% and increases the sign language recognition rate by about 3% compared with the trigram model.
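
For reference, the general form of a maximum entropy (log-linear) language model into which such trigram and base-phrase features can be plugged is the standard textbook form below, not a formula quoted from the paper:

```latex
% General maximum entropy (log-linear) language model; f_i are binary or
% real-valued features (e.g., trigram and base-phrase features), lambda_i
% their weights, and Z(h) the normalizer over the vocabulary.
P(w \mid h) \;=\; \frac{1}{Z(h)} \exp\!\left(\sum_{i} \lambda_i\, f_i(h, w)\right),
\qquad Z(h) = \sum_{w'} \exp\!\left(\sum_{i} \lambda_i\, f_i(h, w')\right).
```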

  17. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory; LIN, SHOU-DE [Los Alamos National Laboratory

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.

  18. A Model of Instruction for Integrating Culture and Language.

    Science.gov (United States)

    Papalia, Anthony

    An integrated model of instruction in language and culture uses a sequential method of discovering sensation, perception, concept, and principle to develop self-analysis skills in students. When planning activities for learning a language and developing cultural understanding, teachers might follow a sequence such as the following: introduce…

  19. A graphical Specification Language for Modeling Concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, Gerald H.; Pascoe, James; Welch, Peter; Loader, Roger; Sunderam, Vaidy

    2002-01-01

    Introduced in this paper is a new graphical modeling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects, along with CSP algebraic ex

  20. A graphical Specification Language for Modeling Concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, G.H.; Pascoe, James; Welch, Peter; Loader, Roger; Sunderam, Vaidy

    2002-01-01

    Introduced in this paper is a new graphical modeling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects, along with CSP algebraic

  1. Graphical modelling language for specifying concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, G.H.

    2003-01-01

    Introduced in this (shortened) paper is a graphical modelling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects of concurrent software

  2. Guiding Principles for Language Assessment Reform: A Model for Collaboration

    Science.gov (United States)

    Green, Brent A.; Andrade, Maureen Snow

    2010-01-01

    Traditionally, practitioners interested in language test reform have focused on the qualities within an examination which result in either positive or negative impacts on participants, institutions, and society. Recent views suggest a multifaceted interaction among factors affecting language test reform. We introduce a model for test reform that…

  3. Leveraging Small-Lexicon Language Models

    Science.gov (United States)

    2016-12-31

    and predictive capacity for) variation between related languages. Our deliverable is the finished product: normalized lexicons and marked cognate...Total vowels”). It does not provide lexical items or transcribed phonological data. Table 4 Limited language-family coverage of currently...have provided. One additional character – / ʋ / – is used as the high, back, rounded, fricated vowel. It appears variously in the literature as /v

  4. A Dynamical Systems Model for Language Change.

    Science.gov (United States)

    1995-03-01

    11, Georgetown University, 1982. [6] A. S. Kroch. Function and grammar in the history of English: Periphrastic "do.". In Ralph Fasold, editor...cally, posit 3 Boolean parameters, Specifier first/final; Head first/final; Verb second allowed or not, leading to 8 possible grammars/languages (English...Nonverb second populations tend to gain Verb second over time (e.g., English-type languages change to a more German type) contrary to historically

  5. Fence - An Efficient Parser with Ambiguity Support for Model-Driven Language Specification

    CERN Document Server

    Quesada, Luis; Cortijo, Francisco J

    2011-01-01

    Model-based language specification has applications in the implementation of language processors, the design of domain-specific languages, model-driven software development, data integration, text mining, natural language processing, and corpus-based induction of models. Model-based language specification decouples language design from language processing and, unlike traditional grammar-driven approaches, which constrain language designers to specific kinds of grammars, it needs general parser generators able to deal with ambiguities. In this paper, we propose Fence, an efficient bottom-up parsing algorithm with lexical and syntactic ambiguity support that enables the use of model-based language specification in practice.

  6. Intended and unintended consequences of China's zero markup drug policy.

    Science.gov (United States)

    Yi, Hongmei; Miller, Grant; Zhang, Linxiu; Li, Shaoping; Rozelle, Scott

    2015-08-01

    Since economic liberalization in the late 1970s, China's health care providers have grown heavily reliant on revenue from drugs, which they both prescribe and sell. To curb abuse and to promote the availability, safety, and appropriate use of essential drugs, China introduced its national essential drug list in 2009 and implemented a zero markup policy designed to decouple provider compensation from drug prescription and sales. We collected and analyzed representative data from China's township health centers and their catchment-area populations both before and after the reform. We found large reductions in drug revenue, as intended by policy makers. However, we also found a doubling of inpatient care that appeared to be driven by supply, instead of demand. Thus, the reform had an important unintended consequence: China's health care providers have sought new, potentially inappropriate, forms of revenue. Project HOPE—The People-to-People Health Foundation, Inc.

  7. Integrating language models into classifiers for BCI communication: a review

    Science.gov (United States)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  8. Language acquisition is model-based rather than model-free.

    Science.gov (United States)

    Wang, Felix Hao; Mintz, Toben H

    2016-01-01

    Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.

  9. ODQ: A Fluid Office Document Query Language

    Directory of Open Access Journals (Sweden)

    Xuhong Liu

    2015-06-01

    Full Text Available Fluid office documents, as semi-structured data often represented by Extensible Markup Language (XML), are important parts of Big Data. These office documents have different formats, and their matching Application Programming Interfaces (APIs) depend on the development platform and version, which causes difficulty in custom development and information retrieval. To solve this problem, we have been developing an office document query (ODQ) language which provides a uniform method to retrieve content from documents with different formats and versions. ODQ builds a common document model ontology to conceal the format details of documents and provides a uniform operation interface to handle office documents with different formats. The results show that ODQ has advantages in format independence and can facilitate users in developing document processing systems with good interoperability.
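
ODQ's query syntax is not reproduced in this record, so the sketch below only illustrates the underlying idea of a format-hiding document model: a single helper returns paragraph text from two differently shaped XML office formats (both invented here) through one interface.

```python
# Illustration of a format-independent document model: one query interface
# over two invented XML office formats. This is not ODQ's actual syntax.
import xml.etree.ElementTree as ET

FORMAT_A = "<doc><body><p>Hello</p><p>World</p></body></doc>"
FORMAT_B = "<office><text><para value='Hello'/><para value='World'/></text></office>"

def paragraphs(xml_string):
    root = ET.fromstring(xml_string)
    if root.tag == "doc":                       # format A: text in <p> elements
        return [p.text for p in root.iter("p")]
    if root.tag == "office":                    # format B: text in attributes
        return [p.get("value") for p in root.iter("para")]
    raise ValueError("unknown office format")

print(paragraphs(FORMAT_A))   # ['Hello', 'World']
print(paragraphs(FORMAT_B))   # ['Hello', 'World']
```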

  10. How does language model size affect speech recognition accuracy for the Turkish language?

    Directory of Open Access Journals (Sweden)

    Behnam ASEFİSARAY

    2016-05-01

    Full Text Available In this paper we aimed at investigating the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provided details of our approach for obtaining the LM for Turkish. Since the LM is obtained by statistical processing of raw text, we expect that by increasing the size of available data for training the LM, SR accuracy will improve. Since this study is based on recognition of Turkish, which is a highly agglutinative language, it is important to find out the appropriate size for the training data. The minimum required data size is expected to be much higher than the data needed to train a language model for a language with a low level of agglutination such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM as these are expected to be different for a highly agglutinative language. We showed that by increasing the training data size to an appropriate level, the recognition accuracy improved; on the other hand, changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.

  11. VMTL: a language for end-user model transformation

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2016-01-01

    Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently... these guidelines. VMTL draws on our previous work on the usability-oriented Visual Model Query Language. We implement VMTL using the Henshin model transformation engine, and empirically investigate its learnability via two user experiments and a think-aloud protocol analysis. Our experiments, although conducted on computer science students exhibiting only some of the characteristics of end-user modelers, show that VMTL compares favorably in terms of learnability with two state-of-the-art model transformation languages: Epsilon and Henshin. Our think-aloud protocol analysis confirms many of the design decisions...

  12. Data on the interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-09-01

    The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  13. Data on the interexaminer variation of minutia markup on latent fingerprints

    Directory of Open Access Journals (Sweden)

    Bradford T. Ulery

    2016-09-01

    Full Text Available The data in this article supports the research paper entitled “Interexaminer variation of minutia markup on latent fingerprints” [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the “White Box Latent Print Examiner Study,” in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  14. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  15. Language Model Applications to Spelling with Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Anderson Mora-Cortes

    2014-03-01

    Full Text Available Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems, particularly in conjunction with other AAL technologies.

  16. Cognitive aging and hearing acuity: modeling spoken language comprehension.

    Science.gov (United States)

    Wingfield, Arthur; Amichetti, Nicole M; Lash, Amanda

    2015-01-01

    The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model; where it is strong and where there are gaps to be filled.

  17. Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis

    Institute of Scientific and Technical Information of China (English)

    Bo Chen; Hui He; Jun Guo

    2008-01-01

    Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, thus many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required to better estimate the subjectivity of a document. Therefore, in this paper, our focus is mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
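
Since a MaxEnt classifier with a Gaussian prior corresponds to L2-regularized logistic regression, the subjectivity-classification setup can be approximated with scikit-learn as below; the tiny data set, n-gram range and regularization strength are placeholders, and the paper's Global-Filtering/Local-Weighting feature selection is not reproduced.

```python
# Rough stand-in for a MaxEnt subjectivity classifier with a Gaussian prior:
# L2-regularized logistic regression over n-gram counts (scikit-learn assumed
# to be installed). Data, n-gram range and C are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["a breathtaking and moving film", "the film runs 120 minutes",
        "I hated every dull minute", "the director was born in 1970"]
labels = [1, 0, 1, 0]                     # 1 = subjective, 0 = objective

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),                      # unigram + bigram features
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),   # Gaussian prior ~ L2
)
model.fit(docs, labels)
print(model.predict(["a dull and lifeless film"]))
```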

  18. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into

  19. Imitation, sign language skill and the Developmental Ease of Language Understanding (D-ELU model

    Directory of Open Access Journals (Sweden)

    Emil eHolmer

    2016-02-01

    Full Text Available Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013), pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills

  20. A Grammar Analysis Model for the Unified Multimedia Query Language

    Institute of Scientific and Technical Information of China (English)

    Zhong-Sheng Cao; Zong-Da Wu; Yuan-Zhen Wang

    2008-01-01

    The unified multimedia query language (UMQL) is a powerful general-purpose multimedia query language, and it is very suitable for multimedia information retrieval. The paper proposes a grammar analysis model to implement an effective grammatical processing for the language. It separates the grammar analysis of a UMQL query specification into two phases: syntactic analysis and semantic analysis, and then respectively uses extended Backus-Naur form (EBNF) and logical algebra to specify both restrictive grammar rules. As a result, the model can present error-guiding information for a query specification whose grammar is incorrect. The model not only suits well the processing of UMQL queries, but also has a guiding significance for other projects concerning the query processing of descriptive query languages.

  1. Methods & Strategies: A Model of Shared Language

    Science.gov (United States)

    Baird, Kate; Coy, Stephanie; Pocock, Aija

    2015-01-01

    The authors' rural community experienced an explosion of young learners moving into their schools who did not have English as their primary language. To help their teachers meet these challenges, they began to partner with a program that provides grant-funded support for migrant learners (see Internet Resources) to find ways to address these…

  3. Formulating "Principles of Procedure" for the Foreign Language Classroom: A Framework for Process Model Language Curricula

    Science.gov (United States)

    Villacañas de Castro, Luis S.

    2016-01-01

    This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model which is characterized by enacting "principles of procedure" which are specific to the discipline which the school subject belongs to. Rather than to replace or dissolve current approaches to FL teaching and curriculum…

  4. Visualisation of Domain-Specific Modelling Languages Using UML

    NARCIS (Netherlands)

    Graaf, B.; Van Deursen, A.

    2006-01-01

    Currently, general-purpose modelling tools are often only used to draw diagrams for the documentation. The introduction of model-driven software development approaches involves the definition of domain-specific modelling languages that allow code generation. Although graphical representations of the

  5. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    Science.gov (United States)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.

  6. Modeling the language learning strategies and English language proficiency of pre-university students in UMS: A case study

    Science.gov (United States)

    Kiram, J. J.; Sulaiman, J.; Swanto, S.; Din, W. A.

    2015-10-01

    This study aims to construct a mathematical model of the relationship between a student's language learning strategy usage and English language proficiency. Fifty-six pre-university students of University Malaysia Sabah participated in this study. A self-report questionnaire called the Strategy Inventory for Language Learning was administered to measure their language learning strategy preferences before they sat for the Malaysian University English Test (MUET), the results of which were used to measure their English language proficiency. The model was fitted using multiple linear regression analysis with stepwise variable selection. We conducted various assessments of the resulting model, including the global F-test, root mean square error and R-squared. The model obtained suggests that not all language learning strategies should be included in the model when attempting to predict language proficiency.
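
    A sketch of this kind of model assessment on synthetic data (the predictors stand in for strategy-usage scores; none of the numbers come from the UMS sample):

        # Ordinary least squares with global F-test, RMSE and R-squared on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 56, 3                               # 56 students, 3 retained strategy predictors
        X = rng.normal(size=(n, k))                # stand-ins for strategy-usage scores
        y = 1.0 + X @ np.array([0.8, 0.0, 0.4]) + rng.normal(scale=0.5, size=n)  # proficiency proxy

        X1 = np.column_stack([np.ones(n), X])      # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta

        ss_res = float(resid @ resid)
        ss_tot = float(((y - y.mean()) ** 2).sum())
        r2 = 1 - ss_res / ss_tot
        rmse = np.sqrt(ss_res / n)
        f_stat = (ss_tot - ss_res) / k / (ss_res / (n - k - 1))  # global F-test statistic

        print(f"R^2 = {r2:.3f}, RMSE = {rmse:.3f}, F({k}, {n - k - 1}) = {f_stat:.2f}")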

  7. A database approach to information retrieval: The remarkable relationship between language models and region models

    CERN Document Server

    Hiemstra, Djoerd

    2010-01-01

    In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.

  8. A Model and Questionnaire of Language Identity in Iran: A Structural Equation Modelling Approach

    Science.gov (United States)

    Khatib, Mohammad; Rezaei, Saeed

    2013-01-01

    This study consisted of three main phases including the development of a hypothesised model of language identity in Iran, developing and validating a questionnaire based on this model and finally testing the model based on the questionnaire data. In the first phase of this research, a hypothesised model of language identity in Iran was developed…

  9. Cross Language Information Retrieval Model for Discovering WSDL Documents Using Arabic Language Query

    Directory of Open Access Journals (Sweden)

    Prof. Dr. Torkey I.Sultan

    2013-09-01

    Full Text Available Web service discovery is the process of finding a suitable Web service for a given user's query by analyzing the Web service's WSDL content and finding the best match for the query. The service query must be written in the same language as the WSDL, for example English. Cross-language information retrieval (CLIR) techniques have not been applied to the Web service discovery process; their absence limits search to English-language keywords only, which raises the question: how do people who do not know English find a Web service? This paper proposes the application of CLIR techniques and IR methods to support a bilingual Web service discovery process; the second language proposed here is Arabic. Text mining techniques were applied to the WSDL content and the user's query to prepare them for the CLIR methods. The proposed model was tested on a curated catalogue of Life Science Web Services, http://www.biocatalogue.org/, and solved the research problem with 99.87% accuracy and a precision of 95.06.

  10. Cognitive aging and hearing acuity: Modeling spoken language comprehension

    Directory of Open Access Journals (Sweden)

    Arthur eWingfield

    2015-06-01

    Full Text Available The comprehension of spoken language has been characterized by a number of local theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we examine aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout our discussion our goal is to offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.

  11. Integrating content and language in English language teaching in secondary education: Models, benefits, and challenges

    Directory of Open Access Journals (Sweden)

    Darío Luis Banegas

    2012-10-01

    Full Text Available In the last decade, there has been a major interest in content-based instruction (CBI) and content and language integrated learning (CLIL). These are similar approaches which integrate content and foreign/second language learning through various methodologies and models as a result of different implementations around the world. In this paper, I first offer a sociocultural view of CBI-CLIL. Secondly, I define language and content as vital components in CBI-CLIL. Thirdly, I review the origins of CBI and the continuum perspective, and CLIL definitions and models featured in the literature. Fourth, I summarise current aspects around research in programme evaluation. Last, I review the benefits and challenges of this innovative approach so as to encourage critically context-responsive endeavours.

  12. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model

    Science.gov (United States)

    Goodrich, J. Marc; Lonigan, Christopher J.

    2017-01-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…

  13. The Effect of Dual-Language and Transitional-Bilingual Education Instructional Models on Spanish Proficiency for English Language Learners

    Science.gov (United States)

    Murphy, Audrey Figueroa

    2014-01-01

    The effects of "transitional-bilingual" and "dual-language" educational models on proficiency in students' home language (Spanish) were examined in a study of English language learners in the first and second grades in a large urban elementary school. In each grade, students were taught with either a transitional-bilingual…

  14. Modelling the Perceived Value of Compulsory English Language Education in Undergraduate Non-Language Majors of Japanese Nationality

    Science.gov (United States)

    Rivers, Damian J.

    2012-01-01

    Adopting mixed methods of data collection and analysis, the current study models the "perceived value of compulsory English language education" in a sample of 138 undergraduate non-language majors of Japanese nationality at a national university in Japan. During the orientation period of a compulsory 15-week English language programme,…

  15. Language modeling for automatic speech recognition of inflective languages an applications-oriented approach using lexical data

    CERN Document Server

    Donaj, Gregor

    2017-01-01

    This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...

  16. Language

    DEFF Research Database (Denmark)

    Sanden, Guro Refsum

    2016-01-01

    Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate...... communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation...... of a company. Language policies and/or strategies can be used to regulate a company’s internal modes of communication. Language management tools can be deployed to address existing and expected language needs. Continuous feedback from the front line ensures strategic learning and reduces the risk of suboptimal...

  17. Research on the Acculturation Model for Second Language Acquisition.

    Science.gov (United States)

    Schumann, John H.

    1986-01-01

    Presents a model of second language acquisition based on the social-psychology of acculturation, including factors in social, affective, personality, cognitive, biological, aptitude, personal, input, and instructional areas. Studies which test this model are reviewed and evaluated. (Author/CB)

  18. Industrial application of formal models generated from domain specific languages

    NARCIS (Netherlands)

    Hooman, J.

    2016-01-01

    Domain Specific Languages (DSLs) provide a lightweight approach to incorporate formal techniques into the industrial workflow. From DSL instances, formal models and other artefacts can be generated, such as simulation models and code. Having a single source for all artefacts improves maintenance and

  19. Integrating Articulatory Constraints into Models of Second Language Phonological Acquisition

    Science.gov (United States)

    Colantoni, Laura; Steele, Jeffrey

    2008-01-01

    Models such as Eckman's markedness differential hypothesis, Flege's speech learning model, and Brown's feature-based theory of perception seek to explain and predict the relative difficulty second language (L2) learners face when acquiring new or similar sounds. In this paper, we test their predictive adequacy as concerns native English speakers'…

  20. F-Alloy: An Alloy Based Model Transformation Language

    OpenAIRE

    Gammaitoni, Loïc; Kelsen, Pierre

    2015-01-01

    Model transformations are one of the core artifacts of a model-driven engineering approach. The relational logic language Alloy has been used in the past to verify properties of model transformations. In this paper we introduce the concept of functional Alloy modules. In essence a functional Alloy module can be viewed as an Alloy module representing a model transformation. We describe a sublanguage of Alloy called F-Alloy that allows the specification of functional Alloy modules. Module...

  1. Building CMU Sphinx language model for the Holy Quran

    Directory of Open Access Journals (Sweden)

    Mohamed Yassine El Amrani

    2016-11-01

    Full Text Available This paper investigates the use of a simplified set of Arabic phonemes in an Arabic speech recognition system applied to the Holy Quran. CMU Sphinx 4 was used to train and evaluate a language model for the Hafs narration of the Holy Quran. The language model was built using a simplified list of Arabic phonemes instead of the commonly used Romanized set, in order to simplify the process of generating the language model. The experiments resulted in a very low word error rate (WER) of 1.5%, obtained with a very small set of audio files in the training phase when all the audio data were used for both the training and the testing phases. However, when using 90% and 80% of the training data, the WER obtained was 50.0% and 55.7%, respectively.
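
    For reference, the word error rate reported above is the length-normalised word-level edit distance between reference and hypothesis transcripts; a minimal computation, independent of the Sphinx toolchain, looks like this (the example strings are made up):

        # Word error rate: (substitutions + insertions + deletions) / reference length,
        # computed with a standard dynamic-programming edit distance over words.
        def word_error_rate(reference, hypothesis):
            ref, hyp = reference.split(), hypothesis.split()
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + cost) # substitution / match
            return d[-1][-1] / len(ref)

        print(word_error_rate("bismillah ir rahman ir rahim", "bismillah ir rahman al rahim"))  # 0.2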

  2. Spin Glass Models of Syntax and Language Evolution

    CERN Document Server

    Siva, Karthik; Marcolli, Matilde

    2015-01-01

    Using the SSWL database of syntactic parameters of world languages, and the MIT Media Lab data on language interactions, we construct a spin glass model of language evolution. We treat binary syntactic parameters as spin states, with languages as vertices of a graph, and assigned interaction energies along the edges. We study a rough model of syntax evolution, under the assumption that a strong interaction energy tends to cause parameters to align, as in the case of ferromagnetic materials. We also study how the spin glass model needs to be modified to account for entailment relations between syntactic parameters. This modification leads naturally to a generalization of Potts models with external magnetic field, which consists of a coupling at the vertices of an Ising model and a Potts model with q=3, that have the same edge interactions. We describe the results of simulations of the dynamics of these models, in different temperature and energy regimes. We discuss the linguistic interpretation of the paramete...

  3. Proposed Bilingual Model for Right to Left Language Applications

    Directory of Open Access Journals (Sweden)

    Farhan M Al Obisat

    2016-09-01

    Full Text Available Using right-to-left languages (RLL) in software programming requires switching the direction of many components in the interface. Preserving the original interface layout and only changing the language may result in different semantics or interpretations of the content. However, this aspect is often dismissed in the field. This research therefore proposes a Bilingual Model (BL) to check and correct the directions in social media applications. Moreover, test-driven development (TDD) for RLL, such as Arabic, is considered in the testing methodologies. Similarly, the bilingual analysis has to follow both the TDD and BL models.

  4. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field that has proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  5. Contemporary model of language organization: an overview for neurosurgeons.

    Science.gov (United States)

    Chang, Edward F; Raygor, Kunal P; Berger, Mitchel S

    2015-02-01

    Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in "dorsal" and "ventral" pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.

  6. A computational language approach to modeling prose recall in schizophrenia.

    Science.gov (United States)

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.

  7. Lexical access in sign language: A computational model

    Directory of Open Access Journals (Sweden)

    Naomi Kenney Caselli

    2014-05-01

    Full Text Available Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  8. Lexical access in sign language: a computational model.

    Science.gov (United States)

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
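
    A toy sketch of the general spreading-activation architecture described in the two records above (the features, signs and weights are invented for illustration; this is not the Chen and Mirman parameterisation):

        # Toy spreading activation: sub-lexical units (handshape, location) excite signs,
        # and signs compete via lateral inhibition. Units and weights are illustrative only.
        import numpy as np

        signs = ["MOTHER", "FATHER", "SISTER"]
        features = ["handshape:5", "handshape:L", "location:chin", "location:forehead"]

        # feature -> sign excitatory weights (rows: features, cols: signs)
        W = np.array([[1.0, 0.0, 1.0],    # handshape:5 shared by MOTHER and SISTER
                      [0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

        def recognise(perceived, steps=20, inhibition=0.3, decay=0.1):
            f = np.array([1.0 if name in perceived else 0.0 for name in features])
            a = np.zeros(len(signs))
            for _ in range(steps):
                excite = W.T @ f
                inhibit = inhibition * (a.sum() - a)      # lateral inhibition from competitors
                a = np.clip(a + 0.1 * (excite - inhibit) - decay * a, 0.0, 1.0)
            return dict(zip(signs, a.round(3)))

        print(recognise({"handshape:5", "location:chin"}))  # MOTHER wins; SISTER partially active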

  9. Modelling the coevolution of joint attention and language.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan

    2012-11-22

    Joint attention (JA) is important to many social, communicative activities, including language, and humans exhibit a considerably high level of JA compared with non-human primates. We propose a coevolutionary hypothesis to explain this degree-difference in JA: once JA started to aid linguistic comprehension, along with language evolution, communicative success (CS) during cultural transmission could enhance the levels of JA among language users. We illustrate this hypothesis via a multi-agent computational model, where JA boils down to a genetically transmitted ability to obtain non-linguistic cues aiding comprehension. The simulation results and statistical analysis show that: (i) the level of JA is correlated with the understandability of the emergent language; and (ii) CS can boost an initially low level of JA and 'ratchet' it up to a stable high level. This coevolutionary perspective helps explain the degree-difference in many language-related competences between humans and non-human primates, and reflects the importance of biological evolution, individual learning and cultural transmission to language evolution.

  10. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to the relevant geoinformation (GI) standards and specifications adopted by the international standardisation organisations responsible for GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project were object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model; the technical model integrates the different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and the XML schema was then transferred into a GML (Geography Markup Language) application schema. With this procedure the behaviour of each cadastral feature, and the rules for transferring and storing cadastral features in the database, are completely described.
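
    As a rough illustration of what the final GML encoding step amounts to (the element names and namespace usage below are generic GML-style examples, not the actual Croatian cadastre application schema):

        # Minimal sketch of encoding a cadastral parcel as a GML-style XML feature.
        # Element names below are illustrative, not the real application schema.
        import xml.etree.ElementTree as ET

        GML = "http://www.opengis.net/gml"
        ET.register_namespace("gml", GML)

        parcel = ET.Element("CadastralParcel", {"id": "kc-1234/5"})
        ET.SubElement(parcel, "areaSqM").text = "512.4"
        geometry = ET.SubElement(parcel, f"{{{GML}}}Polygon")
        exterior = ET.SubElement(ET.SubElement(geometry, f"{{{GML}}}exterior"), f"{{{GML}}}LinearRing")
        ET.SubElement(exterior, f"{{{GML}}}posList").text = (
            "45.81 15.98 45.81 15.99 45.82 15.99 45.82 15.98 45.81 15.98"
        )

        print(ET.tostring(parcel, encoding="unicode"))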

  11. Specifying Usage Control ModelWith Object Constraint Language

    Directory of Open Access Journals (Sweden)

    Min Li

    2013-02-01

    Full Text Available The recent usage control model (UCON) is a foundation for next-generation access control models, with the distinguishing properties of decision continuity and attribute mutability. Constraints are among the most important UCON components and are central to the motivations of usage analysis and design. The importance of constraints associated with authorizations, obligations, and conditions in UCON has been recognized, but modeling these constraints has received little attention. In this paper we use a de facto constraint specification language from software engineering to analyze the constraints in the UCON model. We show how to represent constraints with the Object Constraint Language (OCL) and give a formalized specification of the UCON model built from basic constraints, such as authorization predicates, obligation actions and condition requirements. Further, we show the flexibility and expressive capability of this specified UCON model with extensive examples.

  12. Introducing the Collaborative Learning Modeling Language (ColeML)

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe

    2014-01-01

    with a few basic concepts, 2) the language should make possible a visual graphic representation of the model, 3) elements of the model should be able to change status during the articulation, 4) the system should accept unfinished models, 5) models should be able to be built by integrating other models......, and differentiating teaching. Technology can help respond to these challenges (Brush & Saye, 2008; Bundsgaard, 2009, 2010; Ge, Planas, & Er, 2010; Helic, Krottmaier, Maurer, & Scerbakov, 2005; Daniel Schneider & Synteta, 2005; D. Schneider, Synteta, & Frété, 2002), but platforms are very expensive to build from...... the ground up. If these platforms are to find their way into everyday teaching and learning, they have to be easy and cheap to develop. Thus there is a need for easy to use application programming platforms. This paper argues that a visual modeling programming language would be an important part...

  13. Computer-Aided Transformation of PDE Models: Languages, Representations, and a Calculus of Operations

    Science.gov (United States)

    2016-01-05

    A domain-specific embedded language called ibvp was developed to model initial...

  14. Phase Transition in a Sexual Age-Structured Model of Learning Foreign Languages

    Science.gov (United States)

    Schwämmle, V.

    The understanding of language competition helps us to predict extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates other ones. This model considers that only young people learn foreign languages. The simulations show a first order phase transition of the ratio between the number of speakers of different languages with the mutation rate as control parameter.

  15. A model of competition among more than two languages

    CERN Document Server

    Fujie, Ryo; Masuda, Naoki

    2012-01-01

    We extend the Abrams-Strogatz model for competition between two languages [Nature 424, 900 (2003)] to the case of n(>=2) competing states (i.e., languages). Although the Abrams-Strogatz model for n=2 can be interpreted as modeling either majority preference or minority aversion, the two mechanisms are distinct when n>=3. We find that the condition for the coexistence of different states is independent of n under the pure majority preference, whereas it depends on n under the pure minority aversion. We also show that the stable coexistence equilibrium and stable monopoly equilibria can be multistable under the minority aversion and not under the majority preference. Furthermore, we obtain the phase diagram of the model when the effects of the majority preference and minority aversion are mixed, under the condition that different states have the same attractiveness. We show that the multistability is a generic property of the model facilitated by large n.
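
    A rough numerical sketch of an n-language competition dynamic of this kind (a generic majority-preference form with an assumed exponent; not the paper's exact equations):

        # Euler simulation of n competing languages under a majority-preference rule:
        # speakers of j switch to i at a rate proportional to s_i * x_i^a, where s_i is
        # the attractiveness of language i. This is a generic form used for illustration.
        import numpy as np

        def simulate(x0, s, a=1.31, dt=0.01, steps=20000):
            x = np.array(x0, dtype=float)
            s = np.array(s, dtype=float)
            for _ in range(steps):
                gain = s * x**a                    # attraction of each language
                inflow = (1 - x) * gain            # speakers arriving from all other languages
                outflow = x * (gain.sum() - gain)  # speakers leaving towards the others
                x += dt * (inflow - outflow)
                x = np.clip(x, 0, 1)
                x /= x.sum()                       # shares of the n languages stay normalised
            return x

        # three languages, equal attractiveness, slightly unequal initial shares
        print(simulate([0.34, 0.33, 0.33], [1.0, 1.0, 1.0]).round(3))  # the largest tends to dominate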

  16. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
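
    The trilateration step that links markups to fixed reference points can be sketched as follows (the coordinates and distances are invented for the example):

        # Locate an annotation point from its distances to three fixed reference points
        # (trilateration), by linearising the circle equations into a 2x2 system.
        import numpy as np

        def trilaterate(p1, p2, p3, r1, r2, r3):
            (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
            A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                          [2 * (x3 - x1), 2 * (y3 - y1)]])
            b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                          r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
            return np.linalg.solve(A, b)

        # Fixed landmarks on the slide and measured distances to a markup point.
        p1, p2, p3 = (0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)
        target = np.array([250.0, 400.0])
        r1, r2, r3 = [np.linalg.norm(target - np.array(p)) for p in (p1, p2, p3)]
        print(trilaterate(p1, p2, p3, r1, r2, r3))  # recovers approximately [250. 400.]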

  17. Optlang: An algebraic modeling language for mathematical optimization

    DEFF Research Database (Denmark)

    Jensen, Kristian; Cardoso, Joao; Sonnenschein, Nikolaus

    2016-01-01

    Optlang is a Python package implementing a modeling language for solving mathematical optimization problems, i.e., maximizing or minimizing an objective function over a set of variables subject to a number of constraints. It provides a common native Python interface to a series of optimization...
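
    A short usage sketch along the lines of the package's documented symbolic interface (treat the exact calls as indicative and check the Optlang documentation; the toy linear programme itself is made up):

        # A small linear programme expressed with Optlang's symbolic interface
        # (maximize 10*x1 + 6*x2 subject to simple resource constraints).
        from optlang import Model, Variable, Constraint, Objective

        x1 = Variable("x1", lb=0)
        x2 = Variable("x2", lb=0)

        c1 = Constraint(x1 + x2, ub=100)           # capacity constraint
        c2 = Constraint(10 * x1 + 4 * x2, ub=600)  # resource constraint

        model = Model(name="toy LP")
        model.objective = Objective(10 * x1 + 6 * x2, direction="max")
        model.add([c1, c2])

        status = model.optimize()
        print(status, model.objective.value)
        for var in (x1, x2):
            print(var.name, "=", var.primal)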

  18. Using Different Models of Second-Language Teaching

    Science.gov (United States)

    Papalia, Anthony

    1975-01-01

    The teaching profession is encouraged to retreat from the search for one method of instruction and to recognize that there are many factors which lead to success in modern language learning. Several models offering different approaches for organizing classroom instruction are suggested. (Author/PMP)

  19. Structural Equation Modeling Reporting Practices for Language Assessment

    Science.gov (United States)

    Ockey, Gary J.; Choi, Ikkyu

    2015-01-01

    Studies that use structural equation modeling (SEM) techniques are increasingly encountered in the language assessment literature. This popularity has created the need for a set of guidelines that can indicate what should be included in a research report and make it possible for research consumers to judge the appropriateness of the…

  20. Bayesian molecular design with a chemical language model.

    Science.gov (United States)

    Ikebata, Hisaki; Hongo, Kenta; Isomura, Tetsu; Maezono, Ryo; Yoshida, Ryo

    2017-03-09

    The aim of computational molecular design is the identification of promising hypothetical molecules with a predefined set of desired properties. We address the issue of accelerating the material discovery with state-of-the-art machine learning techniques. The method involves two different types of prediction; the forward and backward predictions. The objective of the forward prediction is to create a set of machine learning models on various properties of a given molecule. Inverting the trained forward models through Bayes' law, we derive a posterior distribution for the backward prediction, which is conditioned by a desired property requirement. Exploring high-probability regions of the posterior with a sequential Monte Carlo technique, molecules that exhibit the desired properties can computationally be created. One major difficulty in the computational creation of molecules is the exclusion of the occurrence of chemically unfavorable structures. To circumvent this issue, we derive a chemical language model that acquires commonly occurring patterns of chemical fragments through natural language processing of ASCII strings of existing compounds, which follow the SMILES chemical language notation. In the backward prediction, the trained language model is used to refine chemical strings such that the properties of the resulting structures fall within the desired property region while chemically unfavorable structures are successfully removed. The present method is demonstrated through the design of small organic molecules with the property requirements on HOMO-LUMO gap and internal energy. The R package iqspr is available at the CRAN repository.

  1. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  2. How different are language models and word clouds?

    NARCIS (Netherlands)

    Kaptein, R.; Hiemstra, D.; Kamps, J.

    2010-01-01

    Word clouds are a summarised representation of a document’s text, similar to tag clouds which summarise the tags assigned to documents. Word clouds are similar to language models in the sense that they represent a document by its word distribution. In this paper we investigate the differences

  6. Language modeling for what-with-where on GOOG-411

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-09-01

    Full Text Available This paper describes the language modelling (LM) architectures and recognition experiments that enabled support of 'what-with-where' queries on GOOG-411. First the paper compares accuracy trade-offs between a single national business LM for business...

  7. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    2006-01-01

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim...... to abstract from concrete transformation strategies by distinguishing two major paradigms for representing control flow in process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contribution of this paper are generic strategies...... for transforming from block-oriented process languages to graph-oriented languages, and vice versa....

  8. Paired structures in logical and semiotic models of natural language

    DEFF Research Database (Denmark)

    Rodríguez, J. Tinguaro; Franco, Camilo; Montero, Javier

    2014-01-01

    The evidence coming from cognitive psychology and linguistics shows that pairs of reference concepts (e.g. good/bad, tall/short, nice/ugly, etc.) play a crucial role in the way we use and understand natural languages every day in order to analyze reality and make decisions. Different situations...... languages through logical models usually assumes that reference concepts are just each other's complement. In this paper, we informally discuss these issues more deeply, claiming in a positional manner that an adequate logical study and representation of the features and complexity of natural languages...... relationships holding between the pair of reference concepts from which the valuation structure emerges. Different relationships may enable the representation of different types of neutrality, understood here as an epistemic hesitation regarding the references. However, the standard approach to natural...

  9. Modeling Educational Content: The Cognitive Approach of the PALO Language

    Directory of Open Access Journals (Sweden)

    M. Felisa Verdejo Maíllo

    2004-01-01

    Full Text Available This paper presents a reference framework to describe educational material. It introduces the PALO Language as a cognitive based approach to Educational Modeling Languages (EML. In accordance with recent trends for reusability and interoperability in Learning Technologies, EML constitutes an evolution of the current content-centered specifications of learning material, involving the description of learning processes and methods from a pedagogical and instructional perspective. The PALO Language, thus, provides a layer of abstraction for the description of learning material, including the description of learning activities, structure and scheduling. The framework makes use of domain and pedagogical ontologies as a reusable and maintainable way to represent and store instructional content, and to provide a pedagogical level of abstraction in the authoring process.

  10. A temporal model for Clinical Data Analytics language.

    Science.gov (United States)

    Safari, Leila; Patrick, Jon D

    2013-01-01

    The proposal of a special purpose language for Clinical Data Analytics (CliniDAL) is presented along with a general model for expressing temporal events in the language. The temporal dimension of clinical data needs to be addressed from at least five different points of view. Firstly, how to attach the knowledge of time-based constraints to queries; secondly, how to mine temporal data in different CISs with various data models; thirdly, how to deal with both relative time and absolute time in the query language; fourthly, how to tackle internal time-event dependencies in queries; and finally, how to manage historical time events preserved in the patient's narrative. The temporal elements of the language are defined in Backus-Naur Form (BNF) along with a UML schema. Its use in a designed taxonomy of a five-class hierarchy of data analytics tasks shows the solution to problems of time-event dependencies in a highly complex cascade of queries needed to evaluate scientific experiments. The issues in using the model in a practical way are discussed as well.

  11. A Formal Semantic Model for the Access Specification Language RASP

    Directory of Open Access Journals (Sweden)

    Mark Evered

    2015-05-01

    Full Text Available The access specification language RASP extends traditional role-based access control (RBAC concepts to provide greater expressive power often required for fine-grained access control in sensitive information systems. Existing formal models of RBAC are not sufficient to describe these extensions. In this paper, we define a new model for RBAC which formalizes the RASP concepts of controlled role appointment and transitions, object attributes analogous to subject roles and a transitive role/attribute derivation relationship.

  12. An Extended Clustering Algorithm for Statistical Language Models

    CERN Document Server

    Ueberla, J P

    1994-01-01

    Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is ``enough'' data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with exi...

  13. LANGUAGE

    Institute of Scientific and Technical Information of China (English)

    朱妤

    2009-01-01

    The word "language" comes from the Latin word "lingua", which means "tongue". The tongue is used in more sound combinations than any other organ of speech. A broader interpretation of "language" is that it is any form of expression. This includes writing, sign language, dance, music, painting, and mathematics. But the basic form of language is speech.

  14. Modeling of Future Initial Teacher of Foreign Language Training, Using Situation Analysis

    Directory of Open Access Journals (Sweden)

    Maryana М. Sidun

    2012-12-01

    Full Text Available The article describes the modeling of the initial training of future foreign language teachers using situation analysis, and defines the stages of modeling in the formation of the future teacher's professional competence: preparatory, analytical and executive.

  15. HTEL: a HyperText Expression Language

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jørgen

    1999-01-01

    been submitted. A special tool has been used to build the HTEL-interpreter, as an example belonging to a family of interpreters for domain-specific languages. Members of that family have characteristics that are closely related to structural patterns found in the mark-ups of HTML. HTEL should also be seen...

  16. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim...... to abstract from concrete transformation strategies by distinguishing two major paradigms for process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa. We also present two case studies of applying our strategies....

  17. Knowledge Structure Measures of Reader's Situation Models across Languages: Translation Engenders Richer Structure

    Science.gov (United States)

    Kim, Kyung; Clariana, Roy B.

    2015-01-01

    In order to further validate and extend the application of recent knowledge structure (KS) measures to second language settings, this investigation explores how second language (L2, English) situation models are influenced by first language (L1, Korean) translation tasks. Fifty Korean low proficient English language learners were asked to read an…

  18. Free Trade Agreements and Firm-Product Markups in Chilean Manufacturing

    DEFF Research Database (Denmark)

    Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick

    In this paper, we use detailed information about firms' product portfolios to study how trade liberalization affects prices, markups and productivity. We document these effects using firm-product level data in Chilean manufacturing following two major trade agreements with the EU and the US...... at the firm-product level. On average, adjustment on the profit margin does not appear to play a role. However, for more differentiated products, we find some evidence of an increase in markups, suggesting that firms do not fully pass through increases in productivity to prices whenever they have enough...

  19. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of a programming language consists of the formal definition of its syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states, an abstraction of computer memory, and morphisms model state changes, i.e. the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure, with many useful proved properties, and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. The visualization and facility of our model make it not only a new model of the structural operational semantics of imperative programming languages but also a useful aid for educational purposes.
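
    A small sketch of the states-as-objects, steps-as-morphisms idea (the toy assignment language and dictionary states are illustrative, not the paper's construction):

        # States are dictionaries (an abstraction of memory); each elementary statement is a
        # morphism state -> state, and a program is the composition of those morphisms.
        from functools import reduce

        def assign(var, expr):
            """Morphism for 'var := expr', where expr is a function of the current state."""
            return lambda state: {**state, var: expr(state)}

        def compose(*morphisms):
            """Composition of morphisms = sequential execution of elementary steps."""
            return lambda state: reduce(lambda s, f: f(s), morphisms, state)

        # x := 3; y := x + 1; x := x * y
        program = compose(
            assign("x", lambda s: 3),
            assign("y", lambda s: s["x"] + 1),
            assign("x", lambda s: s["x"] * s["y"]),
        )

        print(program({}))  # {'x': 12, 'y': 4}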

  20. A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices

    Institute of Scientific and Technical Information of China (English)

    WU GenQing (吴根清); ZHENG Fang (郑方)

    2003-01-01

    In this paper, an important question is raised: whether a small language model can be practically accurate enough. Afterwards, the purpose of a language model, the problems that a language model faces, and the factors that affect the performance of a language model are analyzed. Finally, a novel method for language model compression is proposed, which makes a large language model usable for applications in handheld devices, such as mobiles, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed compression method covers three aspects. First, the language model parameters are analyzed and a criterion based on an importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically to only about 1M bytes while performance hardly decreases. This provides good evidence that a language model compressed by means of a well-designed compression technique is practically accurate enough, and it makes the language model usable in handheld devices.
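
    A toy sketch of two of the compression ideas mentioned here, importance-based pruning of n-grams and rank-based quantization of probabilities (counts, thresholds and codebook size are arbitrary):

        # Toy n-gram model compression: (1) drop bigrams whose importance (here, raw count)
        # falls below a threshold, (2) quantize the surviving probabilities by rank into a
        # small codebook. Thresholds and codebook size are illustrative only.
        from collections import Counter

        bigram_counts = Counter({("the", "cat"): 50, ("cat", "sat"): 30,
                                 ("sat", "on"): 29, ("on", "mat"): 2, ("mat", "."): 1})
        total = sum(bigram_counts.values())

        # 1. Pruning: keep only bigrams whose count clears the importance threshold.
        kept = {bg: c for bg, c in bigram_counts.items() if c >= 5}

        # 2. Rank-based quantization: sort survivors by probability and store only a
        #    small codebook index per bigram instead of a full float.
        ranked = sorted(kept, key=kept.get, reverse=True)
        n_bins = 2
        codebook = {}
        quantized = {}
        for rank, bg in enumerate(ranked):
            bin_id = min(rank * n_bins // len(ranked), n_bins - 1)
            codebook.setdefault(bin_id, kept[bg] / total)   # representative probability per bin
            quantized[bg] = bin_id

        print(quantized)   # bigram -> small codebook index
        print(codebook)    # codebook index -> representative probability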

  1. A language for easy and efficient modeling of Turing machines

    Institute of Scientific and Technical Information of China (English)

    Pinaki Chakraborty

    2007-01-01

    A Turing Machine Description Language (TMDL) is developed for easy and efficient modeling of Turing machines. TMDL supports formal symbolic representation of Turing machines. The grammar for the language is also provided. Then a fast single-pass compiler is developed for TMDL. The scope of code optimization in the compiler is examined. An interpreter is used to simulate the exact behavior of the compiled Turing machines. A dynamically allocated and resizable array is used to simulate the infinite tape of a Turing machine. The procedure for simulating composite Turing machines is also explained. In this paper, two sample Turing machines have been designed in TMDL and their simulations are discussed. The TMDL can be extended to model the different variations of the standard Turing machine.
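
    A minimal Turing machine simulator in the spirit of what such a description language targets (the transition-table encoding is invented, not TMDL syntax); a dictionary tape plays the role of the dynamically resizable array:

        # A tiny Turing machine simulator. The tape is a dict indexed by position, which
        # behaves like an infinite tape growing on demand. The example machine flips a
        # string of bits and halts on the first blank.
        def run(transitions, tape_input, start="q0", blank="_", max_steps=1000):
            tape = dict(enumerate(tape_input))
            state, head = start, 0
            for _ in range(max_steps):
                symbol = tape.get(head, blank)
                if (state, symbol) not in transitions:
                    break                                  # no applicable rule: halt
                new_state, write, move = transitions[(state, symbol)]
                tape[head] = write
                head += {"R": 1, "L": -1, "S": 0}[move]
                state = new_state
            cells = range(min(tape), max(tape) + 1)
            return state, "".join(tape.get(i, blank) for i in cells)

        flip = {("q0", "0"): ("q0", "1", "R"),
                ("q0", "1"): ("q0", "0", "R"),
                ("q0", "_"): ("halt", "_", "S")}

        print(run(flip, "10110"))  # ('halt', '01001_')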

  2. Learning to attend: a connectionist model of situated language comprehension.

    Science.gov (United States)

    Mayberry, Marshall R; Crocker, Matthew W; Knoeferle, Pia

    2009-05-01

    Evidence from numerous studies using the visual world paradigm has revealed both that spoken language can rapidly guide attention in a related visual scene and that scene information can immediately influence comprehension processes. These findings motivated the coordinated interplay account (Knoeferle & Crocker, 2006) of situated comprehension, which claims that utterance-mediated attention crucially underlies this closely coordinated interaction of language and scene processing. We present a recurrent sigma-pi neural network that models the rapid use of scene information, exploiting an utterance-mediated attentional mechanism that directly instantiates the CIA. The model is shown to achieve high levels of performance (both with and without scene contexts), while also exhibiting hallmark behaviors of situated comprehension, such as incremental processing, anticipation of appropriate role fillers, as well as the immediate use, and priority, of depicted event information through the coordinated use of utterance-mediated attention to the scene.

  3. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.

  4. The Radio Language Arts Project: adapting the radio mathematics model.

    Science.gov (United States)

    Christensen, P R

    1985-01-01

    Kenya's Radio Language Arts Project, directed by the Academy for Educational Development in cooperation with the Kenya Institute of Education in 1980-85, sought to teach English to rural school children in grades 1-3 through use of an intensive, radio-based instructional system. Daily half-hour lessons are broadcast throughout the school year and supported by teachers and print materials. The project was further aimed at testing the feasibility of adapting the successful Nicaraguan Radio Math Project to a new subject area. Difficulties were encountered in articulating a language curriculum with the precision required for a media-based instructional system. Also a challenge was defining the acceptable regional standard for pronunciation and grammar; British English was finally selected. An important modification of the Radio Math model concerned the role of the teacher. While Radio Math sought to reduce the teacher's responsibilities during the broadcast, Radio Language Arts teachers played an important instructional role during the English lesson broadcasts by providing translation and checks on work. Evaluations of the Radio Language Arts Project suggest significant gains in speaking, listening, and reading skills as well as high levels of satisfaction on the part of parents and teachers.

  5. Towards an Improved Performance Measure for Language Models

    CERN Document Server

    Ueberla, J P

    1997-01-01

    In this paper a first attempt at deriving an improved performance measure for language models, the probability ratio measure (PRM), is described. In a proof-of-concept experiment, it is shown that PRM correlates better with recognition accuracy and can lead to better recognition results when used as the optimisation criterion of a clustering algorithm. In spite of the approximations and limitations of this preliminary work, the results are very encouraging and should justify more work along the same lines.

  6. A Time-Aware Language Model for Microblog Retrieval

    Science.gov (United States)

    2012-11-01

    Bingjie Wei, Shuai Zhang, Rui Li, Bin Wang (Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China). This paper describes our work (the... social media, has become immensely popular in recent years. There exist many Microblog websites, such as Twitter. Compared the twitter queries and

  7. Rosen's (M,R) system in Unified Modelling Language.

    Science.gov (United States)

    Zhang, Ling; Williams, Richard A; Gatherer, Derek

    2016-01-01

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly non-computable on a Turing machine. If (M,R) is truly non-computable, there are serious implications for the modelling of large biological networks in computer software. A body of work has now accumulated addressing Rosen's claim concerning (M,R) by attempting to instantiate it in various software systems. However, a conclusive refutation has remained elusive, principally since none of the attempts to date have unambiguously avoided the critique that they have altered the properties of (M,R) in the coding process, producing merely approximate simulations of (M,R) rather than true computational models. In this paper, we use the Unified Modelling Language (UML), a diagrammatic notation standard, to express (M,R) as a system of objects having attributes, functions and relations. We believe that this instantiates (M,R) in such a way that none of the original properties of the system are corrupted in the process. Crucially, we demonstrate that (M,R) as classically represented in the relational biology literature is implicitly a UML communication diagram. Furthermore, since UML is formally compatible with object-oriented computing languages, instantiation of (M,R) in UML strongly implies its computability in object-oriented coding languages.

  8. The language of worry: examining linguistic elements of worry models.

    Science.gov (United States)

    Geronimi, Elena M C; Woodruff-Borden, Janet

    2015-01-01

    Despite strong evidence that worry is a verbal process, studies examining linguistic features in individuals with generalised anxiety disorder (GAD) are lacking. The aim of the present study is to investigate language use in individuals with GAD and controls based on GAD and worry theoretical models. More specifically, the degree to which linguistic elements of the avoidance and intolerance of uncertainty worry models can predict diagnostic status was analysed. Participants were 19 women diagnosed with GAD and 22 control women and their children. After participating in a diagnostic semi-structured interview, dyads engaged in a free-play interaction where mothers' language sample was collected. Overall, the findings provided evidence for distinctive linguistic features of individuals with GAD. That is, after controlling for the effect of demographic variables, present tense, future tense, prepositions and number of questions correctly classified those with GAD and controls such that a considerable amount of the variance in diagnostic status was explained uniquely by language use. Linguistic confirmation of worry models is discussed.

  9. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

    Science.gov (United States)

    Morse, Anthony F; Cangelosi, Angelo

    2017-02-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.

  10. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    Science.gov (United States)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  11. Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry

    Science.gov (United States)

    Zschocke, Thomas

    2012-01-01

    Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper has been selected for an extended version from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…

  12. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  14. The Adoption of Mark-Up Tools in an Interactive e-Textbook Reader

    Science.gov (United States)

    Van Horne, Sam; Russell, Jae-eun; Schuh, Kathy L.

    2016-01-01

    Researchers have more often examined whether students prefer using an e-textbook over a paper textbook or whether e-textbooks provide a better resource for learning than paper textbooks, but students' adoption of mark-up tools has remained relatively unexamined. Drawing on the concept of Innovation Diffusion Theory, we used educational data mining…

  15. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
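
    To make the selection idea above concrete, here is a deliberately simplified sketch (assuming i.i.d. symbols rather than variable-length Markov chains, and a hypothetical rule of keeping the least divergent half of the samples): estimate an empirical distribution per sample, compute symmetrised relative entropies between samples, and keep those closest to the majority.

      from collections import Counter
      import math

      def empirical_dist(sample, alphabet):
          counts = Counter(sample)
          total = len(sample)
          # Add-one smoothing so the relative entropies stay finite
          return {s: (counts[s] + 1) / (total + len(alphabet)) for s in alphabet}

      def sym_kl(p, q):
          """Symmetrised Kullback-Leibler divergence between two distributions."""
          return sum(p[s] * math.log(p[s] / q[s]) + q[s] * math.log(q[s] / p[s]) for s in p)

      def select_majority(samples):
          alphabet = sorted(set().union(*samples))
          dists = [empirical_dist(s, alphabet) for s in samples]
          # Score each sample by its total divergence from all others;
          # keep the half with the smallest scores (assumed to share the law Q).
          scores = [sum(sym_kl(d, e) for e in dists) for d in dists]
          keep = sorted(range(len(samples)), key=lambda i: scores[i])[: len(samples) // 2 + 1]
          return sorted(keep)

      samples = ["aababab", "abababa", "bababab", "aaaaaaa"]  # last one is "contaminated"
      print(select_majority(samples))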

  16. The Impact of the "First Language First" Model on Vocabulary Development among Preschool Bilingual Children

    Science.gov (United States)

    Schwartz, Mila

    2014-01-01

    The aim of this exploratory study was to examine the role of the "First Language First" model for preschool bilingual education in the development of vocabulary depth. The languages studied were Russian (L1) and Hebrew (L2) among bilingual children aged 4-5 years in Israel. According to this model, the children's first language of…

  17. Ambiguity and Incomplete Information in Categorical Models of Language

    Directory of Open Access Journals (Sweden)

    Dan Marsden

    2017-01-01

    Full Text Available We investigate notions of ambiguity and partial information in categorical distributional models of natural language. Probabilistic ambiguity has previously been studied using Selinger's CPM construction. This construction works well for models built upon vector spaces, as has been shown in quantum computational applications. Unfortunately, it doesn't seem to provide a satisfactory method for introducing mixing in other compact closed categories such as the category of sets and binary relations. We therefore lack a uniform strategy for extending a category to model imprecise linguistic information. In this work we adopt a different approach. We analyze different forms of ambiguous and incomplete information, both with and without quantitative probabilistic data. Each scheme then corresponds to a suitable enrichment of the category in which we model language. We view different monads as encapsulating the informational behaviour of interest, by analogy with their use in modelling side effects in computation. Previous results of Jacobs then allow us to systematically construct suitable bases for enrichment. We show that we can freely enrich arbitrary dagger compact closed categories in order to capture all the phenomena of interest, whilst retaining the important dagger compact closed structure. This allows us to construct a model with real convex combination of binary relations that makes non-trivial use of the scalars. Finally we relate our various different enrichments, showing that finite subconvex algebra enrichment covers all the effects under consideration.

  18. The Language of Flexible Reuse; Reuse, Portability and Interoperability of Learning Content or Why an Educational Modelling Language

    NARCIS (Netherlands)

    Sloep, Peter

    2003-01-01

    Sloep, P.B. (2004). Reuse, Portability and Interoperability of Learning Content: Or Why an Educational Modelling Language. In R. McGreal, (Ed.), Online Education Using Learning Objects (pp. 128-137). London: Routledge/Falmer.

  19. A Query Language for Formal Mathematical Libraries

    CERN Document Server

    Rabe, Florian

    2012-01-01

    One of the most promising applications of mathematical knowledge management is search: Even if we restrict attention to the tiny fragment of mathematics that has been formalized, the amount exceeds the comprehension of an individual human. Based on the generic representation language MMT, we introduce the mathematical query language QMT: It combines simplicity, expressivity, and scalability while avoiding a commitment to a particular logical formalism. QMT can integrate various search paradigms such as unification, semantic web, or XQuery style queries, and QMT queries can span different mathematical libraries. We have implemented QMT as a part of the MMT API. This combination provides a scalable indexing and query engine that can be readily applied to any library of mathematical knowledge. While our focus here is on libraries that are available in a content markup language, QMT naturally extends to presentation and narration markup languages.

  20. Computational cognitive modeling for the diagnosis of Specific Language Impairment.

    Science.gov (United States)

    Oliva, Jesus; Serrano, J Ignacio; del Castillo, M Dolores; Iglesias, Angel

    2013-01-01

    Specific Language Impairment (SLI), as many other cognitive deficits, is difficult to diagnose given its heterogeneous profile and its overlap with other impairments. Existing techniques are based on different criteria using behavioral variables on different tasks. In this paper we propose a methodology for the diagnosis of SLI that uses computational cognitive modeling in order to capture the internal mechanisms of the normal and impaired brain. We show that machine learning techniques that use the information of these models perform better than those that only use behavioral variables.

  1. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...
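
    As a rough illustration of the metric that drives such a clustering (this is not the original system), the sketch below computes the average class mutual information of a bigram corpus under a given word-to-class assignment; a top-down clusterer would choose, at each step, the binary split that maximises this quantity.

      from collections import Counter
      import math

      def class_mutual_information(bigrams, word2class):
          """Average mutual information between the classes of adjacent words:
          I(C1; C2) = sum_{c1,c2} p(c1,c2) * log2( p(c1,c2) / (p(c1) p(c2)) )."""
          joint = Counter()
          for (w1, w2), n in bigrams.items():
              joint[(word2class[w1], word2class[w2])] += n
          total = sum(joint.values())
          left, right = Counter(), Counter()
          for (c1, c2), n in joint.items():
              left[c1] += n
              right[c2] += n
          mi = 0.0
          for (c1, c2), n in joint.items():
              p12 = n / total
              mi += p12 * math.log2(p12 / ((left[c1] / total) * (right[c2] / total)))
          return mi

      # Toy bigram counts and a hypothetical two-bit class assignment
      bigrams = Counter({("the", "cat"): 10, ("the", "dog"): 8, ("cat", "runs"): 5, ("dog", "runs"): 6})
      word2class = {"the": 0, "cat": 1, "dog": 1, "runs": 2}
      print(round(class_mutual_information(bigrams, word2class), 3))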

  2. Text generation from Taiwanese Sign Language using a PST-based language model for augmentative communication.

    Science.gov (United States)

    Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang

    2004-12-01

    This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.

  3. Improved head-driven statistical models for natural language parsing

    Institute of Scientific and Technical Information of China (English)

    袁里驰

    2013-01-01

    Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they only utilize semantic dependency between words, and do not incorporate other semantic information such as semantic collocation and semantic category. Some improvements on this distinctive parser are presented. Firstly, "valency" is an essential semantic feature of words. Once the valency of a word is determined, the collocation of the word is clear, and the sentence structure can be directly derived. Thus, a syntactic parsing model combining valence structure with semantic dependency is proposed on the basis of head-driven statistical syntactic parsing models. Secondly, semantic role labeling (SRL) is very necessary for deep natural language processing. An integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Experiments are conducted for the refined statistical parser. The results show that 87.12% precision and 85.04% recall are obtained, and F measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.

  4. New Retrieval Method Based on Relative Entropy for Language Modeling with Different Smoothing Methods

    Institute of Scientific and Technical Information of China (English)

    Huo Hua; Liu Junqiang; Feng Boqin

    2006-01-01

    A language model for information retrieval is built by using a query language model to generate queries and a document language model to generate documents. The documents are ranked according to the relative entropies of estimated document language models with respect to the estimated query language model. Two popular and relatively efficient smoothing methods, the Jelinek-Mercer method and the absolute discounting method, are used to smooth the document language model during estimation. A combined model composed of the feedback document language model and the collection language model is used to estimate the query model. A performance comparison between the new retrieval method and the existing method with feedback is made, and the retrieval performances of the proposed method with the two different smoothing techniques are evaluated on three Text Retrieval Conference (TREC) data sets. Experimental results show that the method is effective and performs better than the basic language modeling approach; moreover, the method using the Jelinek-Mercer technique performs better than that using the absolute discounting technique, and the performance is sensitive to the smoothing parameters.
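
    A minimal sketch of the core ranking step, under toy data rather than the TREC collections used in the paper: each document model is smoothed with Jelinek-Mercer interpolation against the collection model, and documents are scored by the negative KL divergence from a maximum-likelihood query model, which differs from query log-likelihood only by a document-independent constant.

      from collections import Counter
      import math

      def jm_document_model(doc_tokens, collection_prob, lam=0.7):
          """Jelinek-Mercer smoothing: p(w|d) = lam * p_ml(w|d) + (1 - lam) * p(w|C)."""
          counts = Counter(doc_tokens)
          dlen = len(doc_tokens)
          return lambda w: lam * counts[w] / dlen + (1 - lam) * collection_prob.get(w, 1e-9)

      def rank(query_tokens, docs):
          all_tokens = [w for d in docs for w in d]
          coll = Counter(all_tokens)
          csize = len(all_tokens)
          collection_prob = {w: c / csize for w, c in coll.items()}
          q_model = Counter(query_tokens)
          qlen = len(query_tokens)
          scores = []
          for i, d in enumerate(docs):
              p_d = jm_document_model(d, collection_prob)
              # -KL(q || d) = sum_w p(w|q) log p(w|d) + H(q); H(q) is document-independent
              score = sum((c / qlen) * math.log(p_d(w)) for w, c in q_model.items())
              scores.append((score, i))
          return sorted(scores, reverse=True)

      docs = ["language model for retrieval".split(),
              "smoothing of document models".split(),
              "cooking recipes and kitchens".split()]
      print(rank("language model smoothing".split(), docs))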

  5. Modulation of Language Switching by Cue Timing: Implications for Models of Bilingual Language Control

    Science.gov (United States)

    Khateb, Asaid; Shamshoum, Rana; Prior, Anat

    2017-01-01

    The current study examines the interplay between global and local processes in bilingual language control. We investigated language-switching performance of unbalanced Arabic-Hebrew bilinguals in cued picture naming, using 5 different cuing parameters. The language cue could precede the picture, follow it, or appear simultaneously with it. Naming…

  6. Linguistic Evolution through Language Acquisition: Formal and Computational Models.

    Science.gov (United States)

    Briscoe, Ted, Ed.

    This collection of papers examines how children acquire language and how this affects language change over the generations. It proceeds from the basis that it is important to address not only the language faculty per se within the framework of evolutionary theory, but also the origins and subsequent development of languages themselves, suggesting…

  7. Phase transition in a sexual age-structured model of learning foreign languages

    CERN Document Server

    Schwämmle, V

    2005-01-01

    The understanding of language competition helps us to predict extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates other ones. This model considers that only young people learn foreign languages. The simulations show a first order phase transition where the ratio between the number of speakers of different languages is the order parameter and the mutation rate is the control one.

  8. On Combining Language Models to Improve a Text-based Human-machine Interface

    Directory of Open Access Journals (Sweden)

    Daniel Cruz Cavalieri

    2015-12-01

    Full Text Available This paper concentrates on improving a text-based human-machine interface integrated into a robotic wheelchair. Since word prediction is one of the most common methods used in such systems, the goal of this work is to improve the results using this specific module. For this, an exponential interpolation language model (LM) is considered. First, a model based on partial differential equations is proposed; with the appropriate initial conditions, we are able to design an interpolation language model that merges a word-based n-gram language model and a part-of-speech-based language model. Improvements in keystroke saving (KSS) and perplexity (PP) over the word-based n-gram language model and two other traditional interpolation models are obtained, considering two different task domains and three different languages. The proposed interpolation model also provides additional improvements in the hit rate (HR) parameter.
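
    The general form of exponential (log-linear) interpolation can be sketched as follows; the toy distributions, the weight lam and the renormalisation step are illustrative assumptions, and the paper's PDE-based derivation of the interpolation weights is not reproduced here.

      def loglinear_interpolate(p_word_ngram, p_pos_based, lam=0.6):
          """Combine two next-word distributions as p(w) proportional to
          p1(w)**lam * p2(w)**(1 - lam), then renormalise."""
          words = set(p_word_ngram) | set(p_pos_based)
          raw = {w: (p_word_ngram.get(w, 1e-9) ** lam) * (p_pos_based.get(w, 1e-9) ** (1 - lam))
                 for w in words}
          z = sum(raw.values())
          return {w: v / z for w, v in raw.items()}

      # Toy next-word distributions after the prefix "I want to"
      p_ngram = {"eat": 0.5, "go": 0.3, "sleep": 0.2}
      p_pos = {"eat": 0.3, "go": 0.4, "sleep": 0.3}   # e.g. from a part-of-speech model
      combined = loglinear_interpolate(p_ngram, p_pos)
      print(sorted(combined.items(), key=lambda kv: -kv[1]))   # ranked predictions

    In a word-prediction interface, the top few entries of the combined distribution would be offered as completion candidates, which is what the keystroke-saving and hit-rate measures evaluate.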

  9. Models, Languages and Logics for Concurrent Distributed Systems

    DEFF Research Database (Denmark)

    The EEC Esprit Basic Research Action No 3011, Models, Languages and Logics for Concurrent Distributed Systems, CEDISYS, held its second workshop at Aarhus University in May, 1991, following the successful workshop in San Miniato in 1990. The Aarhus Workshop was centered around CEDISYS research activities, and the selected themes of Applications and Automated Tools in the area of Distributed Systems. The 24 participants were CEDISYS partners, and invited guests with expertise on the selected themes. This booklet contains the program of the workshop, short abstracts for the talks presented

  10. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
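
    As a small illustration of the kind of measurement involved (a toy corpus, not the Brown or Wall Street Journal data), the sketch below trains a bigram model smoothed with a simple form of interpolated absolute discounting (one standard technique, chosen here only for illustration) and reports the per-word cross-entropy of held-out text.

      from collections import Counter, defaultdict
      import math

      def train_bigram_absolute_discount(tokens, discount=0.75):
          """Bigram LM with interpolated absolute discounting and a unigram backoff."""
          bigrams = Counter(zip(tokens, tokens[1:]))
          unigrams = Counter(tokens)
          vocab = set(tokens)
          followers = defaultdict(set)          # distinct continuations of each history
          for (w1, w2) in bigrams:
              followers[w1].add(w2)
          uni_total = len(tokens)

          def prob(w2, w1):
              # p(w2|w1) = max(c(w1,w2) - D, 0)/c(w1) + D * N1+(w1,.)/c(w1) * p_uni(w2)
              backoff = unigrams.get(w2, 0) / uni_total or 1.0 / (len(vocab) + 1)
              c_hist = unigrams[w1]
              if c_hist == 0:
                  return backoff                # unseen history: fall back to the unigram
              # max(..., 1) keeps the backoff weight non-zero for degenerate histories
              lam = discount * max(len(followers[w1]), 1) / c_hist
              return max(bigrams[(w1, w2)] - discount, 0) / c_hist + lam * backoff

          return prob

      def cross_entropy(prob, test_tokens):
          """Per-word cross-entropy (bits) of the test bigrams under the model."""
          pairs = list(zip(test_tokens, test_tokens[1:]))
          return -sum(math.log2(prob(w2, w1)) for w1, w2 in pairs) / len(pairs)

      train = "the cat sat on the mat the dog sat on the rug".split()
      test = "the cat sat on the rug".split()
      model = train_bigram_absolute_discount(train)
      print(round(cross_entropy(model, test), 3), "bits per word")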

  11. The First Language in the Foreign Language Classroom: Teacher Model and Student Language Use--An Exploratory Study

    Science.gov (United States)

    Chavez, Monika

    2016-01-01

    This study investigates how three teachers differed in the amount of first language (L1; here, English) they used during teacher-led instruction in a foreign language (FL; here, German) class and whether differences in the three teachers' L1 were associated with similar differences in their respective students' L1 use, both during teacher-led…

  12. RSMM: a network language for modeling pollutants in river systems

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.B.; Standridge, C.R.; Schnoor, J.L.

    1983-06-01

    Predicting the steady state distribution of pollutants in rivers is important for water quality managers. A new simulation language, the River System Modeling Methodology (RSMM), helps users construct simulation models for analyzing river pollution. In RSMM, a network of nodes and branches represents a river system. Nodes represent elements such as junctions, dams, withdrawals, and pollutant sources; branches represent homogeneous river segments, or reaches. The RSMM processor is a GASP V program. Models can employ either the embedded Streeter-Phelps equations or user supplied equations. The user describes the network diagram with GASP-like input cards. RSMM outputs may be printed or stored in an SDL database. An interface between SDL and DISSPLA provides high quality graphical output.

  13. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence from Word Segmentation

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…

  15. The Agent Modeling Language AML A Comprehensive Approach to Modeling Multi-agent Systems

    CERN Document Server

    Cervenka, Radovan

    2007-01-01

    Multi-agent systems have been a focus of study for more than 25 years. Despite the substantial effort of an active research community, the modeling of multi-agent systems still lacks a complete and proper definition, general acceptance, and practical application. Due to the vast potential of these systems, e.g., to improve software practice and to extend the applications that can feasibly be tackled, this book tries to provide a comprehensive modeling language - the Agent Modeling Language (AML) - as an extension of UML 2.0, concentrating on multi-agent systems and applications.

  16. Declarative business process modelling: principles and modelling languages

    Science.gov (United States)

    Goedertier, Stijn; Vanthienen, Jan; Caron, Filip

    2015-02-01

    The business process literature has proposed a multitude of business process modelling approaches or paradigms, each in response to a different business process type with a unique set of requirements. Two polar paradigms, i.e. the imperative and the declarative paradigm, appear to define the extreme positions on the paradigm spectrum. While imperative approaches focus on explicitly defining how an organisational goal should be reached, the declarative approaches focus on the directives, policies and regulations restricting the potential ways to achieve the organisational goal. In between, a variety of hybrid-paradigms can be distinguished, e.g. the advanced and adaptive case management. This article focuses on the less-exposed declarative approach on process modelling. An outline of the declarative process modelling and the modelling approaches is presented, followed by an overview of the observed declarative process modelling principles and an evaluation of the declarative process modelling approaches.

  17. A novel dependency language model for information retrieval

    Institute of Scientific and Technical Information of China (English)

    CAI Ke-ke; BU Jia-jun; CHEN Chun; QIU Guang

    2007-01-01

    This paper explores the application of term dependency in information retrieval (IR) and proposes a novel dependency retrieval model. This retrieval model suggests an extension to the existing language modeling (LM) approach to IR by introducing dependency models for both query and document. Relevance between document and query is then evaluated by reference to the Kullback-Leibler divergence between their dependency models. This paper introduces a novel hybrid dependency structure, which allows integration of various forms of dependency within a single framework. A pseudo relevance feedback based method is also introduced for constructing the query dependency model. The basic idea is to use query-relevant top-ranking sentences extracted from the top documents at retrieval time as the augmented representation of the query, from which the relationships between query terms are identified. A Markov Random Field (MRF) based approach is presented to ensure the relevance of the extracted sentences, which utilizes the association features between query terms within a sentence to evaluate the relevance of each sentence. This dependency retrieval model was compared with other traditional retrieval models. Experiments indicated that it produces significant improvements in retrieval effectiveness.

  18. An Iterative Algorithm to Build Chinese Language Models

    CERN Document Server

    Luo, X; Luo, Xiaoqiang; Roukos, Salim

    1996-01-01

    We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-like algorithm to segment another set of data. Then, we build an LM based on the second set and use the resulting LM to segment again the first corpus. The alternating procedure provides a self-organized way for the segmenter to detect automatically unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but also discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus.
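
    One of the two alternating operations, segmentation under a fixed word model, can be sketched with a small Viterbi-style dynamic program; the toy vocabulary, the unigram assumption and the unknown-character penalty below are illustrative assumptions, while the real system works with a full word-based LM re-estimated at each iteration.

      import math

      def viterbi_segment(text, word_probs, max_len=4):
          """Best segmentation of an unspaced string under a unigram word model."""
          n = len(text)
          unk = 1e-8  # small probability for unknown single characters
          best = [0.0] + [-math.inf] * n   # best log-prob of a segmentation of text[:i]
          back = [0] * (n + 1)
          for i in range(1, n + 1):
              for j in range(max(0, i - max_len), i):
                  w = text[j:i]
                  p = word_probs.get(w, unk if len(w) == 1 else 0.0)
                  if p > 0 and best[j] + math.log(p) > best[i]:
                      best[i] = best[j] + math.log(p)
                      back[i] = j
          # Recover the word boundaries by walking the backpointers
          words, i = [], n
          while i > 0:
              words.append(text[back[i]:i])
              i = back[i]
          return words[::-1]

      # Toy "LM" over pseudo-characters (single letters stand in for Chinese characters)
      word_probs = {"ab": 0.4, "c": 0.2, "abc": 0.05, "d": 0.2, "cd": 0.1}
      print(viterbi_segment("abcd", word_probs))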

  19. A Modeling Language Based on UML for Modeling Simulation Testing System of Avionic Software

    Institute of Scientific and Technical Information of China (English)

    WANG Lize; LIU Bin; LU Minyan

    2011-01-01

    Because they directly express the patterns and ideas of an individual application domain, domain-specific modeling languages (DSMLs) are more and more frequently used to build models instead of combinations of one or more general-purpose constructs. Based on the profile mechanism of the unified modeling language (UML) 2.2, a DSML is presented for modeling simulation testing systems of avionic software (STSAS). To define the syntax, semantics and notation of the DSML, a domain model of the STSAS is given, from which the domain concepts and the relationships among them are generalized; the domain model is then mapped into a UML meta-model, named the UML-STSAS profile. Assuming a flight control system (FCS) as the system under test (SUT), we design the relevant STSAS. The results indicate that extending UML to the simulation testing domain can effectively and precisely model STSAS.

  20. A Model of a Generic Natural Language Interface for Querying Database

    Directory of Open Access Journals (Sweden)

    Hanane Bais

    2016-02-01

    Full Text Available Extracting information from a database is typically done by using a structured language such as SQL (Structured Query Language), but non-expert users cannot use such languages. Using Natural Language (NL) to communicate with a database can therefore be a powerful tool. Without any help, however, computers cannot understand this language, which is why it is essential to develop an interface able to translate a user's query given in NL into an equivalent one in a Database Query Language (DBQL). This paper presents a model of a generic natural language query interface for querying a database. The model is based on a machine learning approach which allows the interface to automatically improve its knowledge base through experience. The advantage of this interface is that it functions independently of the database language, content and model. Experiments were carried out to study the performance of this interface and to make the corrections needed to improve it.

  1. The logical foundations of scientific theories languages, structures, and models

    CERN Document Server

    Krause, Decio

    2016-01-01

    This book addresses the logical aspects of the foundations of scientific theories. Even though the relevance of formal methods in the study of scientific theories is now widely recognized and regaining prominence, the issues covered here are still not generally discussed in philosophy of science. The authors focus mainly on the role played by the underlying formal apparatuses employed in the construction of the models of scientific theories, relating the discussion with the so-called semantic approach to scientific theories. The book describes the role played by this metamathematical framework in three main aspects: considerations of formal languages employed to axiomatize scientific theories, the role of the axiomatic method itself, and the way set-theoretical structures, which play the role of the models of theories, are developed. The authors also discuss the differences and philosophical relevance of the two basic ways of axiomatizing a scientific theory, namely Patrick Suppes’ set theoretical predicate...

  2. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  3. Visual unified modeling language for the composition of scenarios in modeling and simulation systems

    Science.gov (United States)

    Talbert, Michael L.; Swayne, Daniel E.

    2006-05-01

    The Department of Defense uses modeling and simulation systems in many various roles, from research and training to modeling likely outcomes of command decisions. Simulation systems have been increasing in complexity with the increased capability of low-cost computer systems to support these DOD requirements. The demand for scenarios is also increasing, but the complexity of the simulation systems has caused a bottleneck in scenario development due to the limited number of individuals with knowledge of the arcane simulator languages in which these scenarios are written. This research combines the results of previous efforts from the Air Force Institute of Technology in visual modeling languages to create a language that unifies description of entities within a scenario with its behavior using a visual tool that was developed in the course of this research. The resulting language has a grammar and syntax that can be parsed from the visual representation of the scenario. The language is designed so that scenarios can be described in a generic manner, not tied to a specific simulation system, allowing the future development of modules to translate the generic scenario into simulation system specific scenarios.

  4. Modeling second language change using skill retention theory

    OpenAIRE

    Shearer, Samuel R.

    2013-01-01

    Loss of foreign language proficiency is a major concern for the Department of Defense (DoD). Despite significant expenditures to develop and sustain foreign language skills in the armed forces, the DoD has not been able to create a sufficient pool of qualified linguists. Many theories and hypotheses about the learning of foreign languages are not based on cognitive processes and lack the ability to explain how and why foreign language ...

  5. Describing Generic Ocean Environmental Data Objects Using the Geography Markup Language

    Science.gov (United States)

    2004-07-01

    ... should make the generation of products useful to the navy more efficient. GML is also important from the point of view of the Organisation du... <gml:description>BIO</gml:description> <gml:name codeSpace="www.meds.ca/namelist">Agency</gml:name> </additional_metadata>... once we define a metadata property such as an "Agency", we consider the content of the defined "Agency" to be "BIO".

  6. Development of Semantic Web - Markup Languages, Web Services, Rules, Explanation, Querying, Proof and Reasoning

    Science.gov (United States)

    2008-07-01

    KSL-06-16.html • J. William Murdock, Deborah L. McGuinness, Paulo Pinheiro da Silva, Chris Welty, and David Ferrucci. Explaining Conclusions from... Christopher Welty, J. William Murdock, Paulo Pinheiro da Silva, Deborah L. McGuinness, David Ferrucci, Richard Fikes. Tracking Information Extraction from... services/swsl/requirements/swsl-requirements.shtml. • J. William Murdock, Paulo Pinheiro da Silva, David Ferrucci, Christopher Welty and Deborah L

  7. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    Science.gov (United States)

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.

  8. KML (Keyhole Markup Language) : a key tool in the education of geo-resources.

    Science.gov (United States)

    Veltz, Isabelle

    2015-04-01

    Although going into the field with pupils remains the best way to understand the geological structure of a deposit, it is very difficult to bring them to a mining extraction site, and it is impossible to explore whole regions in search of these resources. For those reasons KML (with the Google Earth interface) is a very complete tool for teaching geosciences. Simple and intuitive, its handling is quickly mastered by the pupils, and it also allows the teachers to validate skills for IT certificates. It allows the use of KML files stemming from online data banks, from personal productions of the teacher, or from pupils' work. These tools offer a global approach in 3D as well as geolocation-based access to any type of geological data. The resource on which I built this KML is taught in the curriculum of the 3 years of French high school: methane hydrate. This non-conventional hydrocarbon molecule falls into the vague border between mineral and organic matter (as phosphate deposits do). For over ten years it has been the subject of a race to exploit gas hydrate fields in order to meet world demand. Methane hydrate fields are very useful and interesting for studying the 3 major themes of geological resources: exploration, exploitation, and risks, especially for environments and populations. The KML which I propose allows the pupils to put themselves in the shoes of a geologist in search of deposits or of the technician who is going to extract the resource. It also allows them to evaluate the risks connected to the effect of tectonic activity or climatic change on the natural or catastrophic release of methane and its role in the increase of the greenhouse effect. This KML, together with plenty of pedagogical activities, is directly downloadable for teachers at http://eduterre.ens-lyon.fr/eduterre-usages/actualites/methane/.

  9. FROM OPEN SOURCE TO OPEN INFORMATION: COLLABORATIVE METHODS IN CREATING XML-BASED MARKUP LANGUAGES

    OpenAIRE

    Rehm, G; H. Lobin

    2000-01-01

    Until the beginning of the last decade, the Internet was primarily used by scientific, educational, and military organisations for the exchange of information such as data files and electronic mail. The introduction of the easy-to-use hypertext system World Wide Web (WWW) has, however, begun a new era of the world-spanning computer network. In this paper we examine a part of the Information Marketplace (Dertouzos, 1997) that will give users of the WWW a wide range of new possibilities for gat...

  10. The Gel Electrophoresis Markup Language (GelML) from the Proteomics Standards Initiative

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J. Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2011-01-01

    The Human Proteome Organisation’s Proteomics Standards Initiative (HUPO-PSI) has developed the GelML data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for mass spectrometry data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications. PMID:20677327

  11. XHTML™ 1.0 The Extensible HyperText Markup Language

    NARCIS (Netherlands)

    Pemberton, S.; et al, not CWI

    2000-01-01

    This specification defines XHTML 1.0, a reformulation of HTML 4 as an XML 1.0 application, and three DTDs corresponding to the ones defined by HTML 4. The semantics of the elements and their attributes are defined in the W3C Recommendation for HTML 4. These semantics provide the foundation for futur

  12. XHTML™ 1.0 The Extensible HyperText Markup Language

    NARCIS (Netherlands)

    S. Pemberton (Steven); not CWI et al

    2000-01-01

    This specification defines XHTML 1.0, a reformulation of HTML 4 as an XML 1.0 application, and three DTDs corresponding to the ones defined by HTML 4. The semantics of the elements and their attributes are defined in the W3C Recommendation for HTML 4. These semantics provide the

  13. Defense Advanced Research Projects Agency (DARPA) Agent Markup Language Computer Aided Knowledge Acquisition

    Science.gov (United States)

    2005-06-01

    Excerpt from the report's OWL ontology of tank classes: ...<rdfs:subClassOf rdf:resource="#ItalianTank"/> </owl:Class> <owl:Class rdf:ID="CentauroB1"> <rdfs:comment>Centauro B-1</rdfs:comment> <rdfs:label>Centauro B-1</rdfs:label> <rdfs:subClassOf rdf:resource="#LightTankRecon"/> </owl:Class> <owl:Class rdf:ID="M1985Lighttank"

  14. Analysis of the Model Checkers' Input Languages for Modeling Traffic Light Systems

    Directory of Open Access Journals (Sweden)

    Pathiah A. Samat

    2011-01-01

    Full Text Available Problem statement: Model checking is an automated verification technique that can be used for verifying properties of a system. A number of model checking systems have been developed over the last few years. However, no guideline is available for selecting the most suitable model checker for modeling a particular system. Approach: In this study, we compare the use of four model checkers, SMV, SPIN, UPPAAL and PRISM, for modeling a distributed control system. In particular, we look at the capabilities of the input languages of these model checkers for modeling this type of system. Limitations and differences of their input languages are compared and analysed by using a set of questions. Results: The result of the study shows that although the input languages of these model checkers have a lot of similarities, they also have a significant number of differences. It also shows that one model checker may be more suitable than others for verifying this type of system. Conclusion: Users need to choose the right model checker for the problem to be verified.

  15. Event Modeling in UML. Unified Modeling Language and Unified Process

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2002-01-01

    We show how events can be modeled in terms of UML. We view events as change agents that have consequences and as information objects that represent information. We show how to create object-oriented structures that represent events in terms of attributes, associations, operations, state charts, and messages. We outline a run-time environment for the processing of events with multiple participants.

  16. Developing a Language Support Model for Mainstream Primary School Teachers

    Science.gov (United States)

    McCartney, Elspeth; Ellis, Sue; Boyle, James; Turnbull, Mary; Kerr, Jane

    2010-01-01

    In the UK, speech and language therapists (SLTs) work with teachers to support children with language impairment (LI) in mainstream schools. Consultancy approaches are often used, where SLTs advise educational staff who then deliver language-learning activities. However, some research suggests that schools may not always sustain activities as…

  17. Stochastic model for the vocabulary growth in natural languages

    CERN Document Server

    Gerlach, Martin

    2013-01-01

    We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core-words which have higher frequency and do not affect the probability of a new word to be used; and (ii) the remaining virtually infinite number of noncore-words which have lower frequency and once used reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the google-ngram database of books published in the last centuries and its main consequence is the generalization of Zipf's and Heaps' law to two scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model the main change on historical time scales i...

  18. Stochastic Model for the Vocabulary Growth in Natural Languages

    Science.gov (United States)

    Gerlach, Martin; Altmann, Eduardo G.

    2013-04-01

    We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf’s and Heaps’ law to two-scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
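
    A deliberately crude simulation in the spirit of the two-class idea can be written in a few lines; the functional form of the innovation probability, the core/noncore split and all parameter values below are assumptions made only for illustration and do not reproduce the paper's fitted model.

      import random

      def generate_text(n_tokens, n_core=100, core_share=0.4, alpha=0.6, seed=0):
          """Toy two-class vocabulary model: with probability core_share emit a core
          word (which never affects innovation); otherwise either coin a brand-new
          noncore word, with a probability that decays as the noncore vocabulary
          grows, or reuse an already-used noncore word."""
          rng = random.Random(seed)
          noncore = []            # noncore words used so far
          text = []
          for _ in range(n_tokens):
              if rng.random() < core_share:
                  text.append(f"core_{rng.randrange(n_core)}")
              else:
                  # Heaps-like innovation: new-word probability shrinks with vocabulary size
                  p_new = 1.0 / (1.0 + len(noncore)) ** (1.0 - alpha)
                  if not noncore or rng.random() < p_new:
                      noncore.append(f"word_{len(noncore)}")
                      text.append(noncore[-1])
                  else:
                      text.append(rng.choice(noncore))
          return text

      text = generate_text(20000)
      print("tokens:", len(text), "distinct words:", len(set(text)))

    Plotting the number of distinct words against the number of tokens for such a generator gives a sublinear, Heaps-like growth curve, which is the qualitative behaviour the two-class model is meant to capture.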

  19. A Hierarchical Model for Language Maintenance and Language Shift: Focus on the Malaysian Chinese Community

    Science.gov (United States)

    Wang, Xiaomei; Chong, Siew Ling

    2011-01-01

    Social factors involved in language maintenance and language shift (LMLS) have been the focus of LMLS studies. Previous studies provide fundamental support for the theoretical development of this research branch. However, there is no discussion regarding the hierarchical order of these social factors, i.e. the degree of importance of various…

  20. Cross-Linguistic Influence in Non-Native Languages: Explaining Lexical Transfer Using Language Production Models

    Science.gov (United States)

    Burton, Graham

    2013-01-01

    The focus of this research is on the nature of lexical cross-linguistic influence (CLI) between non-native languages. Using oral interviews with 157 L1 Italian high-school students studying English and German as non-native languages, the project investigated which kinds of lexis appear to be more susceptible to transfer from German to English and…

  2. UML as a cell and biochemistry modeling language.

    Science.gov (United States)

    Webb, Ken; White, Tony

    2005-06-01

    The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and are beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems, can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.

  3. Cross-language linking of news stories on the web using interlingual topic modelling

    OpenAIRE

    De Smet, Wim; Moens, Marie-Francine

    2009-01-01

    We have studied the problem of linking event information across different languages without the use of translation systems or dictionaries. The linking is based on interlingua information obtained through probabilistic topic models trained on comparable corpora written in two languages (in our case English and Dutch). To achieve this goal, we expand the Latent Dirichlet Allocation model to process documents in two languages. We demonstrate the validity of the learned interlingual topics in a...

  4. Towards a continuous population model for natural language vowel shift.

    Science.gov (United States)

    Shipman, Patrick D; Faria, Sérgio H; Strickland, Christopher

    2013-09-07

    The Great English Vowel Shift of 16th-19th centuries and the current Northern Cities Vowel Shift are two examples of collective language processes characterized by regular phonetic changes, that is, gradual changes in vowel pronunciation over time. Here we develop a structured population approach to modeling such regular changes in the vowel systems of natural languages, taking into account learning patterns and effects such as social trends. We treat vowel pronunciation as a continuous variable in vowel space and allow for a continuous dependence of vowel pronunciation in time and age of the speaker. The theory of mixtures with continuous diversity provides a framework for the model, which extends the McKendrick-von Foerster equation to populations with age and phonetic structures. We develop the general balance equations for such populations and propose explicit expressions for the factors that impact the evolution of the vowel pronunciation distribution. For illustration, we present two examples of numerical simulations. In the first one we study a stationary solution corresponding to a state of phonetic equilibrium, in which speakers of all ages share a similar phonetic profile. We characterize the variance of the phonetic distribution in terms of a parameter measuring a ratio of phonetic attraction to dispersion. In the second example we show how vowel shift occurs upon starting with an initial condition consisting of a majority pronunciation that is affected by an immigrant minority with a different vowel pronunciation distribution. The approach developed here for vowel systems may be applied also to other learning situations and other time-dependent processes of cognition in self-interacting populations, like opinions or perceptions. Copyright © 2013 Elsevier Ltd. All rights reserved.
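
    For context, the classical McKendrick-von Foerster equation that the authors extend governs the density n(a, t) of a population structured by age a at time t; a reminder of its standard form (without the additional phonetic variable and the attraction/dispersion terms introduced in the paper) is:

```latex
\frac{\partial n(a,t)}{\partial t} + \frac{\partial n(a,t)}{\partial a} = -\mu(a,t)\, n(a,t),
\qquad
n(0,t) = \int_0^{\infty} \beta(a,t)\, n(a,t)\, \mathrm{d}a,
```

    where \mu(a,t) is the age-specific mortality rate and \beta(a,t) the birth rate.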

  5. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    Science.gov (United States)

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  6. Advances in Modeling Languages

    Institute of Scientific and Technical Information of China (English)

    戴桂兰

    2002-01-01

    Based on the common features of modeling languages, the paper attempts to present a definition, description content, and application domains for modeling languages. In terms of their derivation and foundation, modeling languages can be divided into four kinds: formal modeling languages, graphic modeling languages, languages integrating graphics and formalization, and meta-modeling languages, from which we outline their representative work, the state of the art, fundamental issues, and development directions for further research.

  7. Uncertainty Modeling Based on Bayesian Network in Ontology Mapping

    Institute of Scientific and Technical Information of China (English)

    LI Yuhua; LIU Tao; SUN Xiaolin

    2006-01-01

    How to deal with uncertainty is crucial in exact concept mapping between ontologies. This paper presents a new framework for modeling uncertainty in ontologies based on Bayesian networks (BNs). In our approach, the Web Ontology Language (OWL) is extended to add probabilistic markups for attaching probability information, the source and target ontologies (expressed in the extended OWL) are translated into BNs, and the mapping between the two ontologies is dug out by constructing the conditional probability tables (CPTs) of the BN using an improved algorithm named I-IPFP, based on the iterative proportional fitting procedure (IPFP). The basic ideas of this framework and algorithm are validated by positive results from computer experiments.
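
    The building block named in the abstract, the iterative proportional fitting procedure (IPFP), can be sketched in a few lines. The code below is the classical two-dimensional IPFP (alternately rescaling rows and columns of a joint table until its marginals match given targets), not the paper's improved I-IPFP variant, and the example numbers are hypothetical.

```python
import numpy as np

def ipfp(table, row_targets, col_targets, max_iter=1000, tol=1e-9):
    """Classical iterative proportional fitting on a 2-D joint table.

    Assumes strictly positive entries so the rescaling never divides by zero.
    """
    t = np.asarray(table, dtype=float).copy()
    for _ in range(max_iter):
        t *= (row_targets / t.sum(axis=1))[:, None]   # fit row marginals
        t *= (col_targets / t.sum(axis=0))[None, :]   # fit column marginals
        if (np.allclose(t.sum(axis=1), row_targets, atol=tol) and
                np.allclose(t.sum(axis=0), col_targets, atol=tol)):
            break
    return t

# Start from a uniform joint distribution and impose new marginals.
joint = np.full((2, 2), 0.25)
fitted = ipfp(joint, row_targets=np.array([0.7, 0.3]),
              col_targets=np.array([0.6, 0.4]))
print(fitted, fitted.sum())
```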

  8. Distance Learning Class Model for Teaching a Foreign Language in University-Level Education Context

    Science.gov (United States)

    Lee, Sun-Min

    2012-01-01

    This study aims to introduce the distance learning class model for a foreign language in university-level education context, and to prove that this class model is effective in cultivating the motivation and interest of university students for learning a foreign language. This distance learning lesson consists of two parts: Online chatting session,…

  9. An ontology-based approach for evaluating the domain appropriateness and comprehensibility appropriateness of modeling languages

    NARCIS (Netherlands)

    Guizzardi, G.; Ferreira Pires, Luis; van Sinderen, Marten J.; Briand, L.; Williams, C.

    2005-01-01

    In this paper we present a framework for the evaluation and (re)design of modeling languages. We focus here on the evaluation of the suitability of a language to model a set of real-world phenomena in a given domain. In our approach, this property can be systematically evaluated by comparing the

  10. Co-Teaching: Towards a New Model for Teacher Preparation in Foreign Language Teacher Education

    Science.gov (United States)

    Altstaedter, Laura Levi; Smith, Judith J.; Fogarty, Elizabeth

    2016-01-01

    This overview article focuses on the co-teaching model, an innovative and comprehensive model for student teaching experiences that provides opportunities to foreign language preservice teachers to develop their knowledge base about teaching and learning foreign languages while gaining in other areas: autonomy, collaboration, and agency. The…

  11. Co-Teaching: Towards a New Model for Teacher Preparation in Foreign Language Teacher Education

    Science.gov (United States)

    Altstaedter, Laura Levi; Smith, Judith J.; Fogarty, Elizabeth

    2016-01-01

    This overview article focuses on the co-teaching model, an innovative and comprehensive model for student teaching experiences that provides opportunities to foreign language preservice teachers to develop their knowledge base about teaching and learning foreign languages while gaining in other areas: autonomy, collaboration, and agency. The…

  12. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    Science.gov (United States)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
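
    The abstract concerns combining multiple n-gram LMs; the WFST machinery itself is out of scope here, but the simplest baseline it generalises, linear interpolation of component models, fits in a short sketch. The toy corpora and interpolation weights below are invented.

```python
from collections import Counter

def train_bigram(tokens):
    """Maximum-likelihood bigram model: P(w2 | w1)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {bg: count / unigrams[bg[0]] for bg, count in bigrams.items()}

def interpolated_prob(bigram, models_with_weights):
    """Linearly interpolate several bigram models (weights should sum to 1)."""
    return sum(weight * model.get(bigram, 0.0)
               for model, weight in models_with_weights)

# Two tiny "genre" corpora, combined with fixed weights.
lm_news = train_bigram("the language model is trained on news text".split())
lm_chat = train_bigram("the language model is small and simple".split())
print(interpolated_prob(("language", "model"), [(lm_news, 0.6), (lm_chat, 0.4)]))
```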

  13. Markup of temporal information in electronic health records.

    Science.gov (United States)

    Hyun, Sookyung; Bakken, Suzanne; Johnson, Stephen B

    2006-01-01

    Temporal information plays a critical role in the understanding of clinical narrative (i.e., free text). We developed a representation for marking up temporal information in a narrative, consisting of five elements: 1) reference point, 2) direction, 3) number, 4) time unit, and 5) pattern. We identified 254 temporal expressions from 50 discharge summaries and represented them using our scheme. The overall inter-rater reliability among raters applying the representation model was 75 percent agreement. The model can contribute to temporal reasoning in computer systems for decision support, data mining, and process and outcomes analyses by providing structured temporal information.
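
    The five-element representation listed in the abstract maps naturally onto a small record type. The sketch below is a hypothetical Python rendering for illustration (the paper defines the five elements, not this class or the example values):

```python
from dataclasses import dataclass

@dataclass
class TemporalExpression:
    reference_point: str   # e.g. an anchoring event such as "admission"
    direction: str         # "before" or "after" the reference point
    number: float          # how many units
    time_unit: str         # "hour", "day", "week", ...
    pattern: str           # e.g. a single point versus a recurring pattern

# Hypothetical markup of the phrase "three days before admission"
expr = TemporalExpression(reference_point="admission", direction="before",
                          number=3, time_unit="day", pattern="point")
print(expr)
```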

  14. A Model for Community-based Language Teaching to Young Learners: The Impact of University Outreach

    Directory of Open Access Journals (Sweden)

    Martha Nyikos

    2015-01-01

    Full Text Available A primary challenge given to university foreign language departments and Title VI National Resource Centers is to increase interest and participation in foreign language learning, with particular emphasis on less commonly taught languages (LCTLs). Given that many LCTLs in high demand by the US government, including Arabic, Chinese, Persian and Turkish, rarely find their way into the school curricula, this article offers a successful ongoing community-based model of how one university-town partnership addresses advocacy with programming for pre-K-grade 9. Non-native and heritage undergraduate language students who volunteered as community language teachers found the experience invaluable to their pedagogical development. Teacher education programs or language departments can employ this approach to community-based teaching, by providing free, sustained language teaching in existing community centers. This article offers guidance for how to start and expand such a program.

  15. Agent Based Models of Language Competition: Macroscopic descriptions and Order-Disorder transitions

    CERN Document Server

    Vazquez, F; Miguel, M San

    2010-01-01

    We investigate the dynamics of two agent based models of language competition. In the first model, each individual can be in one of two possible states, either using language $X$ or language $Y$, while the second model incorporates a third state XY, representing individuals that use both languages (bilinguals). We analyze the models on complex networks and two-dimensional square lattices by analytical and numerical methods, and show that they exhibit a transition from one-language dominance to language coexistence. We find that the coexistence of languages is more difficult to maintain in the Bilinguals model, where the presence of bilinguals in use facilitates the ultimate dominance of one of the two languages. A stability analysis reveals that the coexistence is more unlikely to happen in poorly-connected than in fully connected networks, and that the dominance of only one language is enhanced as the connectivity decreases. This dominance effect is even stronger in a two-dimensional space, where domain coar...
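
    To give a flavour of the agent-based setup, the sketch below runs a mean-field, two-state toy model in which a randomly chosen speaker adopts language X with probability equal to the current fraction of X speakers. It omits the bilingual XY state, the network/lattice structure, and the stability analysis studied in the paper; it is only an illustrative baseline.

```python
import random

def simulate(n_agents=1000, steps=50000, seed=1):
    """Toy mean-field language competition between two states 'X' and 'Y'."""
    rng = random.Random(seed)
    agents = ["X"] * (n_agents // 2) + ["Y"] * (n_agents - n_agents // 2)
    x_count = agents.count("X")
    for _ in range(steps):
        i = rng.randrange(n_agents)
        new_state = "X" if rng.random() < x_count / n_agents else "Y"
        if new_state != agents[i]:
            x_count += 1 if new_state == "X" else -1
            agents[i] = new_state
    return x_count / n_agents

# The fraction of X speakers performs a random walk toward one-language dominance.
print(simulate())
```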

  16. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    Science.gov (United States)

    2014-06-01

    ...model of the system (Friedenthal, Moore and Steiner 2008, 17). The premise is that maintaining a logical and consistent model can be accomplished... The Standard for the Exchange of Product data (STEP) subgroup of ISO defines a standard data format for certain types of SE information (Johnson 2006)...

  17. The Cummins model: a framework for teaching nursing students for whom English is a second language.

    Science.gov (United States)

    Abriam-Yago, K; Yoder, M; Kataoka-Yahiro, M

    1999-04-01

    The health care system requires nurses with the language ability and the cultural knowledge to meet the health care needs of ethnic minority immigrants. The recruitment, admission, retention, and graduation of English as a Second Language (ESL) students are essential to provide the workforce to meet the demands of the multicultural community. Yet, ESL students possess language difficulties that affect their academic achievement in nursing programs. The application of the Cummins Model of language proficiency is discussed. The Cummins Model provides a framework for nursing faculty to develop educational support that meets the learning needs of ESL students.

  18. Network simulation using the simulation language for alternate modeling (SLAM 2)

    Science.gov (United States)

    Shen, S.; Morris, D. W.

    1983-01-01

    The simulation language for alternate modeling (SLAM 2) is a general purpose language that combines network, discrete event, and continuous modeling capabilities in a single language system. The efficacy of the system's network modeling is examined and discussed. Examples are given of the symbolism that is used, and an example problem and model are derived. The results are discussed in terms of the ease of programming, special features, and system limitations. The system offers many features which allow rapid model development and provides an informative standardized output. The system also has limitations which may cause undetected errors and misleading reports unless the user is aware of these programming characteristics.

  19. Foreign Languages in Vienna Primary Schools – in Mainstream Education and in Current Experimental Models

    Directory of Open Access Journals (Sweden)

    Renate Seebauer

    2015-05-01

    Full Text Available Starting with a short retrospective and an overview of the incorporation of foreign language learning into the Austrian curriculum for primary schools, the paper at hand describes the current situation of foreign language teaching in primary schools as well as various current models of foreign language education in primary schools in Vienna (school years 1 to 4). In terms of objectives, these models exceed the requirements of the curriculum of the formal education system or regard themselves as quantitative and qualitative enrichment. They follow different didactic approaches and/or site-specific characteristics and needs. The formulation of basic skills is to be understood as an attempt to find a common basis of output indicators and to facilitate the transition to secondary education. Although English is the most commonly chosen or offered language, the paper also refers to school experiments that focus on Romance or Slavic languages or on languages of Austria's neighbouring countries.

  20. Exploiting Language Models to Classify Events from Twitter

    Directory of Open Access Journals (Sweden)

    Duc-Thuan Vo

    2015-01-01

    Full Text Available Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learned language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied based on large text corpora within computational linguistic relations. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on tweets’ features, including common term words and relationships among their distinguishing term words. This makes it explicit and convenient to apply k-nearest neighbor techniques for classification. We carried out careful experiments on the Edinburgh Twitter Corpus to show that our method achieves competitive results for classifying events.
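
    A stripped-down version of the final classification step can be sketched with set-overlap similarity and a k-nearest-neighbour vote. The paper's similarity additionally uses relationships between distinguishing terms learned from ConceptNet and LDA-SP; the version below uses only shared terms (Jaccard similarity), and the labelled examples are invented.

```python
from collections import Counter

def jaccard(tokens_a, tokens_b):
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def knn_classify(tweet_tokens, labelled_tweets, k=3):
    """labelled_tweets: list of (token_list, event_label) pairs."""
    neighbours = sorted(labelled_tweets,
                        key=lambda item: jaccard(tweet_tokens, item[0]),
                        reverse=True)[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

labelled = [
    ("earthquake hits city buildings collapse".split(), "earthquake"),
    ("strong earthquake felt downtown".split(), "earthquake"),
    ("team wins cup final celebration".split(), "sports"),
]
print(knn_classify("small earthquake felt near city".split(), labelled, k=2))
```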

  1. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation

    National Research Council Canada - National Science Library

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility...

  2. The role of markup for enabling interoperability in health informatics.

    Science.gov (United States)

    McKeever, Steve; Johnson, David

    2015-01-01

    Interoperability is the faculty of making information systems work together. In this paper we will distinguish a number of different forms that interoperability can take and show how they are realized on a variety of physiological and health care use cases. The last 15 years have seen the rise of very cheap digital storage both on and off site. With the advent of the Internet of Things, people's expectations are for greater interconnectivity and seamless interoperability. The potential impact these technologies have on healthcare is dramatic: from improved diagnoses through immediate access to a patient's electronic health record, to in silico modeling of organs and early stage drug trials, to predictive medicine based on top-down modeling of disease progression and treatment. We will begin by looking at the underlying technology, classify the various kinds of interoperability that exist in the field, and discuss how they are realized. We conclude with a discussion on future possibilities that big data and further standardizations will enable.

  3. Mother Tongue Use in Task-Based Language Teaching Model

    Science.gov (United States)

    Hung, Nguyen Viet

    2012-01-01

    Research on English language teaching (ELT) has focused on the use of the mother tongue (L1) for years. Task-based language teaching (TBLT) has also proliferated. Considerable findings have been made in the existing literature of the two fields; however, no mention has been made of the combination of these two ELT aspects, i.e.,…

  4. A Model for Community-based Language Teaching to Young Learners: The Impact of University Outreach

    OpenAIRE

    Martha Nyikos; Vesna Dimitrieska

    2015-01-01

    A primary challenge given to university foreign language departments and Title VI National Resource Centers is to increase interest and participation in foreign language learning, with particular emphasis on less commonly taught languages (LCTLs). Given that many LCTLs in high demand by the US government, including Arabic, Chinese, Persian and Turkish, rarely find their way into the school curricula, this article offers a successful ongoing community-based model of how one university-town par...

  5. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    OpenAIRE

    Sirinoot Boonsuk; Atiwong Suchato; Proadpran Punyabukkana; Chai Wutiwiwatchai; Nattanun Thatphithakkul

    2014-01-01

    Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in apply...

  6. Analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    CERN Document Server

    Otero-Espinar, Victoria; Nieto, Juan J; Mira, Jorge

    2013-01-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers taking into account bilingualism and incorporating a parameter to measure the distance between languages. After previous numerical simulations, the model yielded that coexistence might lead to survival of both languages within monolingual speakers along with a bilingual community or to extinction of the weakest tongue depending on different parameters. In this paper, such study is closed with thorough analytical calculations to settle the results in a robust way and previous results are refined with some modifications. From the present analysis it is possible to almost completely assay the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. Also, we obtain conclusions on the way the languages evolve with time. Our rigorous considerations also sug...

  7. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    With the aim of preserving endangered languages, we propose, in this paper, a novel two-languages competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which are different from the previous state-dependent impulsive differential equations. By using a qualitative analysis method, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of state-dependent impulsive control strategies. Meanwhile, numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models with the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding a new control measure to protect the endangered language is offered.

  8. Design of an SMS Information Application for Unsoed Alumni Using UML (Unified Modeling Language)

    Directory of Open Access Journals (Sweden)

    Bangun Wijayanto

    2007-02-01

    Full Text Available Unified Modeling Language (UML) is a language which has become the industry standard for visualizing, designing, and documenting software systems. Using UML we can model any type of software application, and the application can also be written in many languages. SMS (Short Message Service) is the best choice for overcoming geographic problems in spreading information to Unsoed alumni. The aim of this research is to compile UML (Unified Modeling Language) notation in the development of an SMS server for Unsoed alumni. This research was conducted with a software engineering method. The design results of the SMS software for Unsoed alumni show that UML (Unified Modeling Language) helps in design and software programming.

  9. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    ...to model-based systems engineering (MBSE) by formally defining an MBSE methodology for employing architecture in system analysis (MEASA) that presents

  10. Developing a blended learning based model for teaching foreign languages in engineering institutions

    Directory of Open Access Journals (Sweden)

    Kudryashova Alexandra V.

    2016-01-01

    Full Text Available The present paper deals with the theoretical and methodical background of the concept of blended learning, which is a major didactic tool of modern methods of foreign language teaching. It also considers the principles of integrating blended learning into the teaching of foreign languages in engineering institutions. The basics of pedagogical modelling used for developing a model of integrating blended learning into foreign language teaching are defined. A schematic representation of the model is given, and the way of implementing the described model in the educational process is shown via the example of a lesson on “Cohesive devices”.

  11. iPad: Semantic annotation and markup of radiological images.

    Science.gov (United States)

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  12. A conceptual data model and modelling language for fields and agents

    Science.gov (United States)

    de Bakker, Merijn; de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    Modelling is essential in order to understand environmental systems. Environmental systems are heterogeneous because they consist of fields and agents. Fields have a value defined everywhere at all times, for example surface elevation and temperature. Agents are bounded in space and time and have a value only within their bounds, for example biomass of a tree crown or the speed of a car. Many phenomena have properties of both fields and agents. Although many systems contain both fields and agents and integration of these concepts would be required for modelling, existing modelling frameworks concentrate on either agent-based or field-based modelling and are often low-level programming frameworks. A concept is lacking that integrates fields and agents in a way that is easy to use for modelers who are not software engineers. To address this issue, we develop a conceptual data model that represents fields and agents uniformly. We then show how the data model can be used in a high-level modelling language. The data model represents fields and agents in space-time. Also relations and networks can be represented using the same concepts. Using the conceptual data model we can represent static and mobile agents that may have spatial and temporal variation within their extent. The concepts we use are phenomenon, property set, item, property, domain and value. The phenomenon is the thing that is modelled, which can be any real world thing, for example trees. A phenomenon usually consists of several items, e.g. single trees. The domain is the spatiotemporal location and/or extent for which the items in the phenomenon are defined. Multiple different domains can coexist for a given phenomenon. For example a domain describing the extent of the trees and a domain describing the stem locations. The same goes for the property, which is an attribute of the thing that is being modeled. A property has a value, which is possibly discretized, for example the biomass over the tree crown
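
    The named concepts (phenomenon, property set, item, property, domain, value) suggest a small set of record types. The sketch below is a hypothetical Python rendering of those concepts for illustration only; it is not the authors' data model implementation, and the tree example is invented.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Domain:
    """Spatio-temporal locations/extents for which items are defined."""
    space: Dict[str, Any]
    time: Any = None

@dataclass
class Property:
    """An attribute of the modelled thing, holding one value per item."""
    name: str
    values: Dict[str, Any] = field(default_factory=dict)   # item id -> value

@dataclass
class PropertySet:
    domain: Domain
    properties: List[Property] = field(default_factory=list)

@dataclass
class Phenomenon:
    """The modelled thing (e.g. 'trees'), consisting of items (single trees)."""
    name: str
    items: List[str] = field(default_factory=list)
    property_sets: List[PropertySet] = field(default_factory=list)

stem_locations = Domain(space={"tree1": (12.4, 7.9), "tree2": (13.1, 8.2)})
biomass = Property("biomass", {"tree1": 250.0, "tree2": 310.0})
trees = Phenomenon("trees", items=["tree1", "tree2"],
                   property_sets=[PropertySet(stem_locations, [biomass])])
print(trees.property_sets[0].properties[0].values["tree2"])
```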

  13. Acoustic Model Adaptation for Indonesian Language Utterance Training System

    Directory of Open Access Journals (Sweden)

    Linda Indrayanti

    2010-01-01

    Full Text Available Problem statement: In order to build an utterance training system for the Indonesian language, a speech recognition system designed for Indonesian is necessary. However, the system hardly works well because the pronunciation variants of non-native utterances may lead to substitution/deletion errors. This research investigated pronunciation variants and proposes acoustic model adaptation to improve the performance of the system. Approach: The proposed acoustic model adaptation works in three steps: analyzing pronunciation variants with knowledge-based and data-derived methods; aligning the knowledge-based and data-derived results in order to list frequently mispronounced phones with their variants; and performing a state-clustering procedure with the list obtained from the second step. Further, three Speaker Adaptation (SA) techniques were used in combination with the acoustic model adaptation and compared with each other. In order to evaluate and tune the adaptation techniques, a perceptual evaluation by three human raters was performed to obtain the "true" recognition results. Results: The proposed method achieved an average gain in Hit + Rejection (the percentage of utterances correctly accepted and correctly rejected by the system, as the human raters do) of 2.9 points and 2 points for native and non-native subjects, respectively, when compared with the system without adaptation. Average gains of 12.7 and 6.2 points in Hit + Rejection for native and non-native students were obtained by combining SA with the acoustic model adaptation. Conclusion/Recommendations: Performance evaluation of the adapted system demonstrated that the proposed acoustic model adaptation can improve Hit even though there is a slight increase in False Alarm (FA, the percentage of incorrectly accepted utterances that the human raters reject). The performance of the proposed acoustic model adaptation depends strongly on the effectiveness of the state-clustering procedure.

  14. A Qualitative Study of Domain Specific Languages for Model Driven Security

    Directory of Open Access Journals (Sweden)

    Muhammad Qaiser Saleem

    2014-05-01

    Full Text Available In Model-Driven development, software system design is represented through models which are created using general purpose modeling languages, e.g., UML. Later on, system artifacts are automatically generated from these models. Model-Driven Security is a specialization of the Model-Driven paradigm towards the domain of security, where security objectives are modeled alongside the system models and security infrastructures are directly generated from these models. Currently available general purpose modeling languages like UML do not have the capability to model security objectives alongside the system models. Over the past decade, many researchers have been trying to address these limitations of the general purpose modeling languages and have come up with several Domain Specific Modeling Languages for Model Driven Security. In this study, a comparative study is presented of the security Domain Specific Modeling Languages proposed by the most prominent researchers for the development of secure systems. A set of success criteria has been defined, and these DSLs are critically analyzed against it to obtain qualitative results.

  15. Modelling the language shift process of Hispanic immigrants.

    Science.gov (United States)

    Veltman, C

    1988-01-01

    Using data from the 1976 Survey of Income and Education, this article examines the language shift process of 3455 persons of Spanish mother tongue who were not born in the US. Findings show that 1) nearly 1/3 of the Spanish language immigrants arrive in the US in their middle teens to their early 20s, over 20% arrive before age 10, and less than 20% arrive after age 35; 2) the longer the length of stay, the more extensive the adoption of the English language; 3) the process of anglicization begins soon after arrival (within 18 months, 6.6% had made English their usual personal language); 4) some who had arrived before 1970 had become English monolinguals; 5) 20% of each immigrant group will remain essentially monolingual in Spanish, even after prolonged US residence; 6) the language shift process is more or less completed within about 15 years; 7) the younger the person at the time of arrival, the more extensive the shift to English; and 8) no additional language shift occurs after some 15 years of residence in the US. Approximately 80% of those aged 15-24 at time of arrival, 70% of those aged 25-34, 50% of those aged 35-44, and 30% of those aged 45 or more will come to speak English on a regular basis. Except for the oldest group, about 20% of those in each cohort will have been anglicized. Mexican and Puerto Rican immigrants have somewhat lower rates of language shift of all types to English than do Cuban, Central American, and other Hispanic immigrants. Spanish language immigrants are resisting neither the learning of English nor its adoption as a preferred usual language. However, the full impact of the anglicization occurring at any given time will not be felt until the next generation.

  16. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  17. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.

  18. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.

  19. On religion and language evolutions seen through mathematical and agent based models

    CERN Document Server

    Ausloos, M

    2011-01-01

    (shortened version) Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are also a characteristic of humankind. New religions and new languages appear while others disappear. All religions and languages evolve as they adapt to societal developments. On the other hand, the number of adherents of a given religion, or the number of persons speaking a language, is not fixed. Several questions can be raised, e.g., from a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a microscopic viewpoint: can one invent agent-based models to describe macroscopic aspects? Do simple evolution equations exist? It is intuitively accepted, but also found through statistical analysis of the frequency distribution that an ...

  20. You Just Want to Be Like that Teacher: Modelling and Intercultural Competence in Young Language Learners

    Science.gov (United States)

    Moloney, Robyn

    2008-01-01

    Language teachers are called upon to understand both the nature of students' intercultural competence and their own role in its development. Limited research attention has been paid to the relationship between the types of behaviour that language teachers model and the intercultural competence their students acquire. This article reports on a case…

  1. Multi-language Development Environments: Design Space, Models, Prototypes, Experiences

    OpenAIRE

    Pfeiffer, Rolf-Helge

    2013-01-01

    Non-trivial software systems are constructed out of many artifacts expressed in multiple modeling and programming languages, describing different system aspects on different levels of abstraction. I call such systems multi-language software systems. Even though artifacts constituting multi-language software systems are heavily interrelated, existing development environments do not sufficiently support developers in development of such systems. In particular, handling relations between heterogeneous...

  2. Modeling the Process of Summary Writing of Chinese Learners of English as a Foreign Language

    Science.gov (United States)

    Li, Jiuliang

    2016-01-01

    In language learning contexts, writing tasks that involve reading of source texts are often used to elicit more authentic integrative language use. Thus, interests in researching these read-to-write tasks in general and as assessment tasks keep growing. This study examined and modeled the process of summary writing as a read-to-write integrated…

  3. Computational models of language diversity: a determination of social and individual benefits

    NARCIS (Netherlands)

    Jorna, R.J.; Faber, N.R.; Jorna, R.J.; Liu, K.; Faber, N.R.

    2011-01-01

    In studying benefits and costs of language diversity the use of computer models is rare. There may be various reasons for this situation, e.g. a) the complexity of the interpretation of language diversity, b) the difficulty in operationalizing factors and dimensions of diversity, c) the absence of

  4. Testing a Model of Teaching for Anxiety and Success for English Language Teaching

    Science.gov (United States)

    Önem, Evrim; Ergenç, Iclal

    2013-01-01

    Much research has shown that there is a negative relationship between high levels of anxiety and success for English language teaching. This paper aimed to test a model of teaching for anxiety and success in English language teaching to affect anxiety and success levels at the same time in a control-experiment group with pre- and post-test study…

  5. MoDeST: A Modelling language for Stochastic Timed Systems

    NARCIS (Netherlands)

    D'Argenio, Pedro R.; Hermanns, Holger; Katoen, Joost-Pieter; Klaren, Ric; de Alfaro, L.; Gilmore, S.

    2001-01-01

    This paper presents a modelling language, called MoDeST, for describing the behaviour of discrete event systems. The language combines conventional programming constructs — such as iteration, alternatives, atomic statements, and exception handling — with means to describe complex systems in a composi

  6. Assisted Imitation: First Steps in the Seed Model of Language Development

    Science.gov (United States)

    Zukow-Goldring, Patricia

    2012-01-01

    In this article, I present the theoretical and empirical grounding for the SEED ("situated", culturally "embodied", "emergent", "distributed") model of early language development. A fundamental prerequisite to the emergence of language behavior/communication is a hands-on, active understanding of everyday events. At the heart of this…

  7. Speech-Language Pathologist and General Educator Collaboration: A Model for Tier 2 Service Delivery

    Science.gov (United States)

    Watson, Gina D.; Bellon-Harn, Monica L.

    2014-01-01

    Tier 2 supplemental instruction within a response to intervention framework provides a unique opportunity for developing partnerships between speech-language pathologists and classroom teachers. Speech-language pathologists may participate in Tier 2 instruction via a consultative or collaborative service delivery model depending on district needs.…

  8. Vocabulary and Grammar Knowledge in Second Language Reading Comprehension: A Structural Equation Modeling Study

    Science.gov (United States)

    Zhang, Dongbo

    2012-01-01

    Using structural equation modeling analysis, this study examined the contribution of vocabulary and grammatical knowledge to second language reading comprehension among 190 advanced Chinese English as a foreign language learners. Vocabulary knowledge was measured in both breadth (Vocabulary Levels Test) and depth (Word Associates Test);…

  9. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition

    Science.gov (United States)

    Kolodny, Oren; Lotem, Arnon; Edelman, Shimon

    2015-01-01

    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…

  10. Computational models of language diversity: a determination of social and individual benefits

    NARCIS (Netherlands)

    Jorna, R.J.; Faber, N.R.; Jorna, R.J.; Liu, K.; Faber, N.R.

    2011-01-01

    In studying benefits and costs of language diversity the use of computer models is rare. There may be various reasons for this situation, e.g. a) the complexity of the interpretation of language diversity, b) the difficulty in operationalizing factors and dimensions of diversity, c) the absence of r

  11. Polychronous Interpretation of Synoptic, a Domain Specific Modeling Language for Embedded Flight-Software

    CERN Document Server

    Besnard, L; Ouy, J; Talpin, J -P; Bodeveix, J -P; Cortier, A; Pantel, M; Strecker, M; Garcia, G; Rugina, A; Buisson, J; Dagnat, F

    2010-01-01

    The SPaCIFY project, which aims at bringing advances in MDE to the satellite flight software industry, advocates a top-down approach built on a domain-specific modeling language named Synoptic. In line with previous approaches to real-time modeling such as Statecharts and Simulink, Synoptic features hierarchical decomposition of application and control modules in synchronous block diagrams and state machines. Its semantics is described in the polychronous model of computation, which is that of the synchronous language Signal.

  12. Avoiding another Green Elephant - A Proposal for the Next Generation HLA based on the Model Driven Architecture

    CERN Document Server

    Tolk, Andreas

    2010-01-01

    When looking through the proceedings of the recent Simulation Interoperability Workshops, a lot of papers - some of them even awarded by the committee - deal with alternative concepts outside or beyond the High Level Architecture (HLA): Web Services, the Extensible Markup Language (XML), Java Beans, Simple Object Access Protocol (SOAP), etc. Similarly, requirements driven by interoperability issues have resulted in the need to use meta modeling, adaptive models, and common repositories. The use of the Unified Modeling Language (UML) as a model description language is also rapidly becoming a standard. All these concepts have relations to the HLA, but they are not part of it. There seems to be a danger that the HLA will be overrun by these developments on the free market and will finally become irrelevant. ... This paper introduces the MDA concept and shows how the HLA can be integrated to become a standard stub for simulation applications of legacy systems, systems under development, and systems of the f...

  13. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) approximately ten times their Medicare-allowable costs compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.
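
    The markup measure used in the study is simply the ratio of gross charges to Medicare-allowable costs; a trivial helper makes the comparison with the reported national average of 3.4 concrete. The dollar figures below are hypothetical.

```python
def charge_to_cost_ratio(gross_charges, medicare_allowable_costs):
    """Markup as defined in the study: charges divided by Medicare-allowable costs."""
    return gross_charges / medicare_allowable_costs

NATIONAL_AVERAGE = 3.4   # reported national average charge-to-cost ratio

# A hypothetical hospital billing $10m against $1m of allowable costs:
ratio = charge_to_cost_ratio(10_000_000, 1_000_000)
print(ratio, ratio / NATIONAL_AVERAGE)   # 10.0, i.e. roughly 3x the national average
```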

  14. Semantically supporting data discovery, markup and aggregation in the European Marine Observation and Data Network (EMODnet)

    Science.gov (United States)

    Lowry, Roy; Leadbetter, Adam

    2014-05-01

    The semantic content of the NERC Vocabulary Server (NVS) has been developed over thirty years. It has been used to mark up metadata and data in a wide range of international projects, including the European Commission (EC) Framework Programme 7 projects SeaDataNet and The Open Service Network for Marine Environmental Data (NETMAR). Within the United States, the National Science Foundation projects Rolling Deck to Repository and Biological & Chemical Data Management Office (BCO-DMO) use concepts from NVS for markup. Further, typed relationships between NVS concepts and terms served by the Marine Metadata Interoperability Ontology Registry and Repository. The vast majority of the concepts publicly served from NVS (35% of ~82,000) form the British Oceanographic Data Centre (BODC) Parameter Usage Vocabulary (PUV). The PUV is instantiated on the NVS as a SKOS concept collection. These terms are used to describe the individual channels in data and metadata served by, for example, BODC, SeaDataNet and BCO-DMO. The PUV terms are designed to be very precise and may contain a high level of detail. Some users have reported that the PUV is difficult to navigate due to its size and complexity (a problem CSIRO have begun to address by deploying a SISSVoc interface to the NVS), and it has been difficult to aggregate data as multiple PUV terms can - with full validity - be used to describe the same data channels. Better approaches to data aggregation are required as a use case for the PUV from the EC European Marine Observation and Data Network (EMODnet) Chemistry project. One solution, proposed and demonstrated during the course of the NETMAR project, is to build new SKOS concept collections which formalise the desired aggregations for given applications, and uses typed relationships to state which PUV concepts contribute to a specific aggregation. Development of these new collections requires input from a group of experts in the application domain who can decide which PUV
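
    The aggregation collections described here are ordinary SKOS constructs, so a minimal sketch with rdflib (assuming rdflib version 6 or later) can show the shape of such a collection. All URIs below are placeholders, not real NVS or PUV identifiers.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

AGG = Namespace("http://example.org/aggregation/")        # placeholder namespace
puv_terms = [                                             # placeholder PUV concept URIs
    URIRef("http://example.org/puv/TERM_A"),
    URIRef("http://example.org/puv/TERM_B"),
]

g = Graph()
collection = AGG["WaterTemperatureAggregation"]
g.add((collection, RDF.type, SKOS.Collection))
g.add((collection, SKOS.prefLabel,
       Literal("Water temperature (aggregated)", lang="en")))
for term in puv_terms:
    g.add((collection, SKOS.member, term))                # which PUV terms contribute

print(g.serialize(format="turtle"))
```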

  15. Maternal sensitivity and language in early childhood: a test of the transactional model.

    Science.gov (United States)

    Leigh, Patricia; Nievar, M Angela; Nathans, Laura

    2011-08-01

    This study examined the relation between mothers' sensitive responsiveness to their children and the children's expressive language skills during early childhood. Reciprocal effects were tested with dyads of mothers and their children participating in the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care and Youth Development. Sensitive maternal interactions positively affected children's later expressive language in the second and third years of life. Although maternal sensitivity predicted later language skills in children, children's language did not affect later maternal sensitivity, as indicated by a structural equation model. These results do not support the 1975 transactional model of child development of Sameroff and Chandler. A consistent pattern of sensitivity throughout infancy and early childhood indicates the importance of fostering maternal sensitivity in infancy for prevention or remediation of expressive language problems in young children.

  16. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners.

    Science.gov (United States)

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, as readers are able to operate metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there is a lack of research testing the Compensatory Encoding Model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one focused on testing whether the reading condition, varying in time constraint, affects the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained reading conditions. The correlation and regression analyses showed that the strength of association between word recognition, working memory, and reading comprehension was much stronger in the time-constrained than in the non-time-constrained reading condition. Study two examined whether FL readers were able to operate metacognitive reading strategies to compensate for inefficient word recognition and working memory limitations in non-time-constrained reading. The participants were tested on the same computerized English word recognition test and Operation Span test. They were required to think aloud while reading and to complete the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified

  17. What's left in language? Beyond the classical model.

    Science.gov (United States)

    Corballis, Michael C

    2015-11-01

    Until recently it was widely held that language, and its left-hemispheric representation in the brain, were uniquely human, emerging abruptly after the emergence of Homo sapiens. Changing views of language suggest that it was not a recent and sudden development in human evolution, but was adapted from dual-stream circuitry long predating hominins, including a system in nonhuman primates specialized for intentional grasping. This system was gradually tailored for skilled manual operations (praxis) and communication. As processing requirements grew more demanding, the neural circuits were increasingly lateralized, with the left hemisphere assuming dominance, at least in the majority of individuals. The trend toward complexity and lateralization was probably accelerated in hominins when bipedalism freed the hands for more complex manufacture and tool use, and more expressive communication. The incorporation of facial and vocal gestures led to the emergence of speech as the dominant mode of language, although gestural communication may have led to generative language before speech became dominant. This scenario provides a more Darwinian perspective on language and its lateralization than has been commonly assumed.

  18. A corpus for mining drug-related knowledge from Twitter chatter: Language models and their utilities

    Directory of Open Access Journals (Sweden)

    Abeed Sarker

    2017-02-01

    Full Text Available In this data article, we present to the data science, natural language processing and public health communities an unlabeled corpus and a set of language models. We collected the data from Twitter using drug names as keywords, including their common misspelled forms. Using this data, which is rich in drug-related chatter, we developed language models to aid the development of data mining tools and methods in this domain. We generated several models that capture (i) distributed word representations and (ii) probabilities of n-gram sequences. The data set we are releasing consists of 267,215 Twitter posts made during the four-month period—November, 2014 to February, 2015. The posts mention over 250 drug-related keywords. The language models encapsulate semantic and sequential properties of the texts.
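
    To indicate how the two kinds of released models could be reproduced on one's own text, the sketch below trains (i) word embeddings with gensim's Word2Vec (assuming gensim version 4 or later) and (ii) maximum-likelihood bigram probabilities with the standard library. The two example posts are invented, not drawn from the released corpus.

```python
from collections import Counter
from gensim.models import Word2Vec   # assumes gensim >= 4

posts = [
    "took ibuprofen for my headache again".split(),
    "the ibuprofen did nothing for the headache".split(),
]

# (i) distributed word representations
w2v = Word2Vec(sentences=posts, vector_size=50, window=3, min_count=1, epochs=20)
print(w2v.wv.most_similar("ibuprofen", topn=3))

# (ii) maximum-likelihood bigram probabilities P(w2 | w1)
unigrams = Counter(tok for post in posts for tok in post)
bigrams = Counter(bg for post in posts for bg in zip(post, post[1:]))
print({bg: c / unigrams[bg[0]] for bg, c in bigrams.items()})
```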

  19. Modelling of internal architecture of kinesin nanomotor as a machine language.

    Science.gov (United States)

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
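
    Since the mapping produces a deterministic finite automaton, the flavour of such a "machine language" can be shown with a generic DFA acceptor. The states and input symbols below are invented placeholders loosely echoing a sense-decide-act cycle; they are not the automaton derived in the paper.

```python
# Transition table of a toy DFA: (state, input symbol) -> next state.
TRANSITIONS = {
    ("unbound", "attach"): "bound",
    ("bound", "sense_cargo"): "loaded",
    ("loaded", "step"): "loaded",
    ("loaded", "detach"): "unbound",
}
START_STATE = "unbound"
ACCEPTING_STATES = {"unbound"}

def accepts(symbols):
    """Return True if the symbol sequence is a word of the DFA's language."""
    state = START_STATE
    for symbol in symbols:
        state = TRANSITIONS.get((state, symbol))
        if state is None:            # undefined transition: reject
            return False
    return state in ACCEPTING_STATES

print(accepts(["attach", "sense_cargo", "step", "step", "detach"]))   # True
print(accepts(["attach", "detach"]))                                  # False
```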

  20. Is it time to Leave Behind the Revised Hierarchical Model of Bilingual Language Processing after Fifteen Years of Service?

    Science.gov (United States)

    Brysbaert, Marc; Duyck, Wouter

    2010-01-01

    The Revised Hierarchical Model (RHM) of bilingual language processing dominates current thinking on bilingual language processing. Recently, basic tenets of the model have been called into question. First, there is little evidence for separate lexicons. Second, there is little evidence for language selective access. Third, the inclusion of…