Natural gas hydrates, as important potential fuels, flow assurance hazards, and possible triggers of submarine geo-hazards and global climate change, have attracted the interest of scientists all over the world. After two centuries of hydrate research, a great amount of scientific data on gas hydrates has been accumulated, so developing the means to manage, share, and exchange these data has become an urgent task. At present, metadata expressed in a markup language is recognized as one of the most efficient ways to facilitate data management, storage, integration, exchange, discovery, and retrieval. The CODATA Gas Hydrate Data Task Group therefore proposed and specified the Gas Hydrate Markup Language (GHML), an extensible conceptual metadata model that characterizes the features of data on gas hydrates. This article introduces the details of the modeling portion of GHML.
Qin, X.; Müller, R. D.; Cannon, J.; Landgrebe, T. C. W.; Heine, C.; Watson, R. J.; Turner, M.
Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit on the rigour and sophistication with which datasets can be combined for geological deep-time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep-time spatio-temporal data analysis and…
Online & CD-ROM Review, 1997
In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
[Front matter: Figure 2-5, Common Name and Location Definitions] Data Display Markup Language…; DOM, document object model; DTD, document type definition; IRIG, Inter-range Instrumentation Group; MathML, Mathematical Markup Language; RCC, Range… Moreover, the trend in T&E is toward a plug-and-play data acquisition system, which requires standard languages and modules for data displays.
Lassila, O; van Harmelen, F; Horrocks, I.; Hendler, J.; McGuinness, DL
The DARPA Agent Markup Language (DAML) program is a United States government sponsored endeavor aimed at providing the foundation for the next web evolution – the semantic web. The program is funding critical research to develop languages, tools and techniques for making considerably more of the
Swat, MJ; Moodie, S; Wimalaratne, SM; Kristensen, NR; Lavielle, M; Mari, A; Magni, P; Smith, MK; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, AC; Kaye, R; Keizer, R; Kloft, C; Kok, JN; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, HB; Parra-Guillen, ZP; Plan, E; Ribba, B; Smith, G; Trocóniz, IF; Yvon, F; Milligan, PA; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N
The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.
Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S
This paper describes Gaussian process regression (GPR) models represented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms, which are widely used for constructing predictive models that take the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and a distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability to represent complex input-output relationships without predefining a set of basis functions, and to predict a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
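The confidence-bound feature described above corresponds to the predictive standard deviation of a GPR model. As an illustrative sketch (using scikit-learn rather than a PMML implementation, and a toy data set in place of the paper's manufacturing data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for a manufacturing data set: a noisy response of one process variable.
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 40).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, X.shape[0])

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2).fit(X, y)

# predict() can return the predictive standard deviation, which is the quantity
# the PMML 4.3 GPR model's confidence-bound feature is meant to carry.
mean, std = gpr.predict(np.array([[2.5], [12.0]]), return_std=True)
# std is larger at 12.0 (outside the training range) than at 2.5 (inside it),
# reflecting the uncertainty quantification discussed in the abstract.
```
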
Goldbaum, Jesse M.
The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML)-based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments, as well as one for a sample AIS, are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
Currently, much effort is being devoted to designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim of making this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no common approach to representing geometry data is available, and such data can be found in various forms, from custom semi-structured text files and source code (C/C++/FORTRAN) to XML and database solutions. XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions in existence. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. The author introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.
Manion Frank J
Background: Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts to close this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) model formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. Methods: We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model into a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description. Results: The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets.
Watanabe, Leandro; Myers, Chris J
The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large, complex, regular systems in a standard way, such as whole-cell and cellular population models. Such models require a large number of variables to represent features like the chromosome in a whole-cell model or the many identical cells in a population model. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, to take full advantage of the package, analysis needs to be aware of the arrays structure: when the array constructs within a model are expanded, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits to this approach with a modest cost in runtime.
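The idea of simulating an arrayed model without expanding it can be sketched with a toy vectorized population of genetic toggle switches; the ODEs and parameters below are illustrative stand-ins, not the SBML models used in the paper:

```python
import numpy as np

def simulate_toggle_population(n_cells=100, a=10.0, b=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate n_cells identical genetic toggle switches at once.

    Instead of expanding the array construct into n_cells separate models,
    all cells share one vectorized right-hand side, mirroring the idea of
    array-aware simulation. (Toy ODE toggle switch, not an SBML model.)
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n_cells)  # repressor 1 concentration, per cell
    v = rng.uniform(0.0, 1.0, n_cells)  # repressor 2 concentration, per cell
    for _ in range(steps):
        du = a / (1.0 + v**b) - u  # mutual repression with Hill coefficient b
        dv = a / (1.0 + u**b) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = simulate_toggle_population()
# Each cell settles into one of the two stable states (u high or v high),
# while only two length-n arrays are stored instead of n expanded models.
```
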
Mohammed Amasha; Salem Alkhalaf
This study examines the use of Facebook Markup Language (FBML) to design an e-learning model that facilitates teaching and learning in an academic setting. This qualitative study presents a case on how Facebook is used to support collaborative activities in higher education. We used FBML to design an e-learning model called processes for e-learning resources in the Specialist Learning Resources Diploma (SLRD) program. Two groups drawn from the SLRD program were used; first were th...
Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally
A challenging area in web based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work, we have XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints and other relevant aspects, especially taking into account human factors. Illustrating examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.
Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui
One of the most successful methodologies to arise from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts dating to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0 has made scenarios of ubiquitous computing much more feasible; consequently, information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Markup Language is an offspring of this synergistic view, in which some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here. The concept of a Fuzzy Control that is conceived and modeled in terms...
.... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...
Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include the Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files.
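A minimal, hypothetical example of the concepts listed above (an internal DTD plus XML elements), parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A small structured document with an internal DTD (the Document Type
# Definition mentioned above). Non-validating parsers such as expat, which
# ElementTree uses, read past the DTD; a validating parser would enforce it.
# The element names here are made up for illustration.
doc = """<?xml version="1.0"?>
<!DOCTYPE article [
  <!ELEMENT article (title, body)>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
]>
<article>
  <title>Interchanging structured documents</title>
  <body>XML separates structure (elements) from presentation.</body>
</article>"""

root = ET.fromstring(doc)
print(root.tag, "->", root.find("title").text)
```
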
Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. The comparison between different data sets and different disciplines creates new data, adds value, and ultimately accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe, and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML), to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. Because its encoding is text-based rather than binary, GHML content is readily readable. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly those defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements, in particular the Geography Markup Language (GML, ISO 19136) and the whole ISO 19000 series. However, the requirements demanded a quick solution and an XML application schema readable by any scientist without a background in information technology. Therefore, ideas, concepts and definitions from these standards were used to build up the modules of GHML without importing any of these markup languages. This enables a comprehensive schema and simple use.
Sagan, D.; Forster, M.; /Cornell U., LNS; Bates, D.A.; /LBL, Berkeley; Wolski, A.; /Liverpool U. /Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; /CERN; Walker, N.J.; /DESY; Larrieu, T.; Roblin, Y.; /Jefferson Lab; Pelaia, T.; /Oak Ridge; Tenenbaum, P.; Woodley, M.; /SLAC; Reiche, S.; /UCLA
A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.
Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris
The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.
The paper discusses the role of descriptive markup languages in the development of digital humanities, a young research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities since the mid-1980s is inseparable from the development of markup languages, beginning with the appearance of SGML, the markup language that was the foundation of TEI, a key standard for the encoding and exchange of humanities texts in the digital environment. Special attention is dedicated to the development of the Text Encoding Initiative (TEI), the key organization behind that standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML
Saadawi, Gilan M; Harrison, James H
Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
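The searching-and-retrieval benefit claimed for structured procedures can be sketched as follows; note that the element names are hypothetical, since the abstract does not reproduce the actual CLP-ML tag set:

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure fragment: the real CLP-ML vocabulary has 124 tags and
# is not listed in the abstract, so these element names are illustrative only.
procedure = ET.fromstring("""
<procedure section="chemistry">
  <title>Serum Glucose</title>
  <reagents>
    <reagent storage="2-8C">Glucose oxidase reagent</reagent>
  </reagents>
  <steps>
    <step order="1">Calibrate analyzer.</step>
    <step order="2">Run controls before patient samples.</step>
  </steps>
</procedure>""")

# Structure-aware retrieval of the kind that is impractical over word-processor
# files: find refrigerated reagents, and traverse steps in their stated order.
refrigerated = [r.text for r in procedure.findall(".//reagent[@storage='2-8C']")]
steps = [s.text for s in sorted(procedure.findall(".//step"),
                                key=lambda s: int(s.get("order")))]
```
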
Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay
An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.
Franke, K.Y.; Guyon, I.; Schomaker, L.R.B.; Vuurpijl, L.G.
WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains
Brian, Geoffrey J.; Jackson, E. Bruce
The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited to supporting only scalar, time-independent data. Additional functionality is required to support vector and matrix data, abstract sub-system models, detail dynamic system models (both discrete and continuous), and define a dynamic data format (such as time-sequenced data) for validation of dynamic system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and to record dynamic data in a compatible form. These capabilities will improve the clarity of exchanged data, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file, thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.
Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard
The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable, manner has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (APIs) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data, all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
...-S), the Web Ontology Query Language (OWL-QL) and Semantic Web Rule Language (SWRL) W3C submissions. This report contains the evolution of these markup languages as well as a discussion of semantic query languages, proof and explanation...
Wernecke, J.; Bailey, J. E.
The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the paralleling maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. Originally created by Keyhole Inc., and then acquired by Google in 2004, in 2007 KML was given over to the Open Geospatial Consortium (OGC). It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, has been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any users can create, edit, and share their own KML, with no or relatively little knowledge of manipulating computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
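As a sketch of how little markup a basic KML feature requires, the following Python snippet assembles a single Placemark; the name and coordinates are made-up example values, not real earthquake data:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # serialize KML with its default namespace

# Minimal KML Placemark of the kind used for point data such as an epicenter.
kml = ET.Element(f"{{{KML_NS}}}kml")
placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
ET.SubElement(placemark, f"{{{KML_NS}}}name").text = "M 5.1 earthquake (example)"
point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
# KML coordinates are longitude,latitude[,altitude]
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-155.28,19.41,0"

print(ET.tostring(kml, encoding="unicode"))
```

A file containing this output (with an XML declaration) can be opened directly in a geobrowser, which is why KML lowered the barrier for non-programmers described above.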
Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Release 2 of Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. No design changes have been made to the description of models between Release 1 and Release 2; changes are restricted to the format of annotations, the correction of errata and the addition of clarifications. Other materials and software are available from the SBML project website at http://sbml.org/.
Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
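For orientation, a minimal SBML Level 3 document has the shape sketched below. The model, compartment, and species identifiers are invented, and real models carry far more detail (units, reactions, kinetic laws); this sketch only shows the declarative container structure the specification defines.

```python
import xml.etree.ElementTree as ET

# A hand-written minimal SBML Level 3 Version 1 Core skeleton.
sbml_doc = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy_model">
    <listOfCompartments>
      <compartment id="cell" constant="true"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="glucose" compartment="cell"
               hasOnlySubstanceUnits="false"
               boundaryCondition="false" constant="false"/>
    </listOfSpecies>
  </model>
</sbml>"""

# Any XML toolchain can read the model back; this is the interoperability point.
ns = {"sbml": "http://www.sbml.org/sbml/level3/version1/core"}
root = ET.fromstring(sbml_doc)
species_ids = [s.get("id") for s in root.findall(".//sbml:species", ns)]
print(species_ids)
```

Because every SBML-supporting tool parses this same representation, a model authored in one simulator can be loaded by another without manual translation.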
Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in low-fidelity aircraft design. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state-of-the-art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative-free direct search techniques, and is intended for solving black-box, simulation-based multiobjective optimization problems with possibly nonsmooth functions where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single-objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demand a formal structured representation of data. XML, a W3C recommendation, is one such standard, and it comes with a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data
Valcic, L.; Bailey, J. E.; Dehn, J.
Over the last five years there has been a proliferation in the development of virtual globe programs. Programs such as Google Earth, NASA World Wind, SkylineGlobe, Geofusion and ArcGIS Explorer each have their own strengths and weaknesses, and whether a market will remain for all of these tools will be determined by user application. This market is currently led by Google Earth, the release of which on 28 June 2005 helped spark a revolution in virtual globe technology by bringing it into the public view and imagination. Many would argue that such a revolution was due, but it was certainly aided by the worldwide name recognition of Google and the creation of a user-friendly interface. Google Earth is an updated version of a program originally called Earth Viewer, which was developed by Keyhole Inc.; it was renamed after Google purchased Keyhole and their technology in 2004. In order to manage the geospatial data within these viewers, the developers created a new XML-based (Extensible Markup Language) format called Keyhole Markup Language (KML). Through manipulation of KML, scientists are finding increasingly creative and more visually appealing methods to display and manipulate their data. A measure of the success of Google Earth and KML is demonstrated by the fact that other virtual globes are now including various levels of KML compatibility. This presentation will display examples of how KML has been applied to scientific data. It will offer a forum for questions pertaining to how KML can be applied to a user's dataset. Interested parties are encouraged to bring examples of projects under development or being planned.
Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung
Many efforts to collect electronic medical records (EMRs) are under way. However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new markup language for medical record charting (called Charting Language), which borrows useful properties from programming languages. With Charting Language, text data describing dynamic situations can easily be used to extract information.
Babaie, H.; Babaei, A.
The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as Seismology Markup Language (SeismML) or Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model using a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and
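An instance document conforming to a schema of this kind might look as follows. The element and attribute names mirror the concepts described (Earthquake, Epicenter, Location, Fault, Segment, Plate) but are illustrative, not the actual SeismML/TectML schemas, and the event values are invented.

```python
import xml.etree.ElementTree as ET

# Illustrative instance document: an Earthquake with an Epicenter at a Location,
# occurring along a Segment of a Fault associated with a named Plate.
doc = """<Earthquake id="eq-001">
  <DateTime>2004-12-26T00:58:53Z</DateTime>
  <Epicenter>
    <Location latitude="3.316" longitude="95.854"/>
  </Epicenter>
  <Fault kind="Thrust">
    <Segment name="segment-A"/>
    <Plate name="Indo-Australian"/>
  </Fault>
</Earthquake>"""

eq = ET.fromstring(doc)
loc = eq.find("./Epicenter/Location")
fault_kind = eq.find("./Fault").get("kind")
print(loc.get("latitude"), loc.get("longitude"), fault_kind)
```

In a real deployment an XSD validator would check such a document against the schema (cardinalities, types, allowed "kind" values) before it is accepted into the database.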
CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef's citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join as members in order to avail of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article will give an introduction to CrossRef XML and what publishers need to do in order to start to deposit DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.
Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.
Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text-based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list-mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIFF and JPEG, for digital microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special-purpose interfaces for analytical cytology data, thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.
Artificial intelligence technology nowadays can be realized in a variety of forms, such as chatbots, and with various methods, one of them using the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing user input against specific patterns in the database. The AIML template design process begins with determining the necessary information, which is then formed into questions, and these questions are adapted to the AIML pattern format. From the results of the study, it can be seen that a question-answering system in the form of a chatbot using the Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
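A toy illustration of the template-matching idea: an AIML category pairs a pattern with a template, and the bot answers by matching normalized user input against the stored patterns. The category below is invented, and the matcher is a naive exact-match sketch (real AIML interpreters also support wildcards like `*` and `_`).

```python
import xml.etree.ElementTree as ET

# One hand-written AIML category: pattern -> template.
aiml_doc = """<aiml version="1.0">
  <category>
    <pattern>WHAT IS AIML</pattern>
    <template>AIML is the Artificial Intelligence Markup Language.</template>
  </category>
</aiml>"""

# Load pattern -> template pairs into a lookup table (the "database").
root = ET.fromstring(aiml_doc)
categories = {
    c.findtext("pattern"): c.findtext("template")
    for c in root.findall("category")
}

def respond(user_input: str) -> str:
    """Naive exact-match version of AIML template matching."""
    key = user_input.strip().upper().rstrip("?")
    return categories.get(key, "I do not know.")

answer = respond("what is aiml?")
print(answer)
```

Scaling this up means adding more categories and a richer matcher; the data format itself stays this simple.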
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The Systems Biology Markup Language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
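The local building block that global methods extend can be sketched in a few lines: estimate how strongly a model output responds to a small perturbation of one parameter. The one-parameter decay model below is invented for illustration; tools like SBML-SAT apply this idea (and its global variants) to full SBML models with many parameters.

```python
import math

def model_output(k: float, t: float = 1.0, x0: float = 10.0) -> float:
    """Amount remaining after time t under first-order decay dx/dt = -k*x."""
    return x0 * math.exp(-k * t)

def local_sensitivity(k: float, h: float = 1e-6) -> float:
    """Central-difference estimate of d(output)/d(k) at parameter value k."""
    return (model_output(k + h) - model_output(k - h)) / (2 * h)

s = local_sensitivity(0.5)
# Analytically, d/dk [10*exp(-k*1)] at k = 0.5 is -10*exp(-0.5) ≈ -6.065,
# so the finite-difference estimate should land very close to that value.
print(round(s, 3))
```

Global methods such as Sobol's method or partial rank correlation coefficients instead sample the whole parameter space and summarize how output variance distributes across parameters, rather than probing a single point.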
Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas
Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e., the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.
This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.
The lifecycle of Web-based applications is characterized by frequent changes to content, user interface, and functionality. Updating content and improving the services provided to users drives further development of a Web-based application. The major goal for the success of a Web-based application is therefore its evolution. However, development and maintenance of Web-based applications suffer from the underlying document-based implementation model. A disciplined evolution of Web-based applications requires the application of software engineering practice for systematic further development and reuse of software artifacts. In this contribution we suggest adopting the component paradigm for the development and evolution of Web-based applications. The approach is based on a dedicated component technology and component-software architecture. It allows abstracting from many technical aspects related to the Web as an application platform by introducing domain-specific markup languages. These languages allow the description of services, which represent domain components in our Web-component-software approach. Domain experts with limited knowledge of technical details can therefore describe application functionality, and the evolution of orthogonal aspects of the application can be decoupled. The whole approach is based on XML to achieve the necessary standardization and economic efficiency for use in real-world projects.
Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea
The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. © 2015 American Society of Plant Biologists. All Rights Reserved.
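To make the data model concrete, the sketch below parses a simplified RSML-style document in which a root's geometry is a polyline of 2-D points, and derives a root length from it. The structure follows the RSML idea of nested scene/plant/root elements, but the tags and values here are simplified and invented, not a schema-valid RSML file.

```python
import xml.etree.ElementTree as ET

# Simplified RSML-style document: one plant with one root whose geometry
# is a polyline of 2-D points (y decreasing downward into the soil).
rsml_doc = """<rsml>
  <scene>
    <plant id="plant1">
      <root id="root1">
        <geometry>
          <polyline>
            <point x="0.0" y="0.0"/>
            <point x="0.1" y="-1.2"/>
            <point x="0.3" y="-2.5"/>
          </polyline>
        </geometry>
      </root>
    </plant>
  </scene>
</rsml>"""

root_elem = ET.fromstring(rsml_doc)
points = [
    (float(p.get("x")), float(p.get("y")))
    for p in root_elem.findall(".//point")
]
# Total root length as the sum of straight-line segment lengths.
length = sum(
    ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    for (x1, y1), (x2, y2) in zip(points, points[1:])
)
print(round(length, 3))
```

Because every tool reads the same polyline representation, a trait like root length computed here would agree with the same trait computed by any other RSML-aware package.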
Sycara, Katia P
CMU did research and development on semantic web services using OWL-S, the semantic web service language, under the Defense Advanced Research Projects Agency (DARPA) Agent Markup Language (DAML) program...
Bajlak Ch. Oorzhak
The article examines the progress of semantic markup of the Electronic corpus of texts in the Tuvan language (ECTTL), which is another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). Semantic markup of Tuvan lexis will take the form of a search engine and reference system that will help users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.
Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio
Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead it depends on the organism, tissue and cell type, as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple pieces of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.
In general, Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
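As a starting point for hand-tagging, a skeletal JATS-style article pairs front matter (metadata) with a body. The element names below (`article`, `front`, `article-meta`, `title-group`, `article-title`, `body`, `p`) follow JATS tagging conventions, but the content is invented and the skeleton omits the document type declaration and many required metadata elements a real deposit would carry.

```python
import xml.etree.ElementTree as ET

# Skeletal JATS-style article: <front> metadata plus a one-paragraph <body>.
jats_doc = """<?xml version="1.0" encoding="UTF-8"?>
<article article-type="research-article">
  <front>
    <article-meta>
      <title-group>
        <article-title>An Example Article</article-title>
      </title-group>
    </article-meta>
  </front>
  <body>
    <p>Hello, JATS.</p>
  </body>
</article>"""

article = ET.fromstring(jats_doc)
title = article.findtext(".//article-title")
body_text = article.findtext(".//body/p")
print(title, "|", body_text)
```

A file like this, saved as UTF-8, can be opened in any web browser for inspection and then checked with the validation tools mentioned above before submission.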
Producing a Data Dictionary from an Extensible Markup Language (XML) Schema in the Global Force Management Data Initiative, by Frederick S Brundick, Computing and Information Sciences Directorate, ARL. Approved for public release; distribution is unlimited.
de Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.
Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library an...
Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.
The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism, providing programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD (Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack deployed at NSF-supported hydrologic observatory sites and other universities around the country supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service, and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes
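The essence of a WaterML 1.x response is a time series: a variable plus a list of timestamped values. The sketch below parses a simplified response; the tags are abbreviated and the readings invented, since real WaterML responses carry namespaces, full site metadata, units, and method/quality codes.

```python
import xml.etree.ElementTree as ET

# Simplified WaterML 1.x-style time series response (tags abbreviated).
waterml_doc = """<timeSeriesResponse>
  <timeSeries>
    <variable><variableName>Discharge</variableName></variable>
    <values>
      <value dateTime="2009-08-01T00:00:00">12.4</value>
      <value dateTime="2009-08-01T01:00:00">12.9</value>
    </values>
  </timeSeries>
</timeSeriesResponse>"""

ts = ET.fromstring(waterml_doc)
variable = ts.findtext(".//variableName")
# Each <value> carries its timestamp as an attribute and the reading as text.
readings = [(v.get("dateTime"), float(v.text)) for v in ts.findall(".//value")]
print(variable, readings)
```

Because every registered data service returns this same message shape, a single client can plot USGS, SNOTEL, and academic testbed data side by side without per-source parsers.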
Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver
The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.
Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML-compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML-compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
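A minimal CML-style molecule record conveys the flavor of the semantically rich output discussed above: atoms carry element types and 3-D coordinates inside an `atomArray`. The water geometry below is a standard textbook approximation, and the sketch omits the namespaces, dictionaries, and computed properties that real CML files from NWChem/FoX would include.

```python
import xml.etree.ElementTree as ET

# Minimal CML-style molecule record (water, approximate gas-phase geometry).
cml_doc = """<molecule id="water">
  <atomArray>
    <atom id="a1" elementType="O" x3="0.000" y3="0.000" z3="0.000"/>
    <atom id="a2" elementType="H" x3="0.757" y3="0.586" z3="0.000"/>
    <atom id="a3" elementType="H" x3="-0.757" y3="0.586" z3="0.000"/>
  </atomArray>
</molecule>"""

mol = ET.fromstring(cml_doc)
elements = [a.get("elementType") for a in mol.findall(".//atom")]
print(mol.get("id"), elements)
```

Because element types and coordinates are explicit, typed fields rather than free text, downstream tools (and search engines) can query such records by chemical content rather than by string matching.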
Wendt, G J
This presentation describes the technical details of implementing a process to create digital teaching files stressing the use of commercial off-the-shelf (COTS) software and hardware and standard hypertext markup language (HTML) to keep development costs to a minimum.
De Loecker, Jan; Warzynski, Frederic Michel Patrick
…estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge, on average, higher markups; and that markups increase upon export entry…
Full Text Available The aim of the paper is to present the possibilities of positioning and visual markup of historical cadastral maps onto Google Maps using open-source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of the cadastral documentation from the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and it is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system which would use digital copies of the original cadastral maps and connect them with systems like Google Maps, and thus both protect the original materials and open up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. Firstly, there is a detailed display of the original, which enables its viewing using dynamic zooming. Secondly, an interactive display is facilitated through blending the cadastral maps with Google Maps, which resulted in establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on cadastral maps. The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for…
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Cáceres, Jesús; Somolinos, Roberto; Pascual, Mario; Martínez, Ignacio; Salvador, Carlos H; Monteagudo, José Luis
The objective of this paper is to introduce a new language called ccML, designed to provide convenient pragmatic information to applications using the ISO/EN13606 reference model (RM), such as electronic health record (EHR) extract editors. EHR extracts are presently built using the syntactic and semantic information provided in the RM and constrained by archetypes. The extra ccML information enables automated editing of the medico-legal context information, which accounts for over 70% of the information in an extract, without modifying the RM information. ccML is defined using a W3C XML Schema file. Valid ccML files complement the RM with additional pragmatic information. The ccML grammar is defined using formal language theory as a single-type tree grammar. The new language is tested using an EHR extract editor application as a proof-of-concept system. Seven ccML PVCodes (predefined value codes) are introduced in this grammar to cope with different realistic EHR editing situations. These seven PVCodes have different interpretation strategies, from direct lookup in the ccML file itself to more complex searches in archetypes or system precomputation. The possibility of declaring generic types in ccML gives rise to ambiguity during interpretation. The criterion used to overcome ambiguity is that specificity should prevail over generality; the opposite would make the individual specific element declarations useless. A new markup language, ccML, is introduced that opens up the possibility of providing applications using the ISO/EN13606 RM with the pragmatic information necessary to be practical and realistic.
In this paper we introduce our idea about how to create a virtual reality system wherein the football teams, or in our terminology, the avatars of the football players, can play a high number of football matches. Based on our former experience in mobile soccer gaming, we suggest developing an appropriate markup language to describe the football players, the coaches and the matches themselves. We review our experience in question, and in the present work the targets shall be set by suggesting the development…
de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D
Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
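The CML output described above can be illustrated with a short sketch. The element names (molecule, atomArray, atom with elementType and x3/y3/z3 coordinates) follow the public CML convention, but the snippet below is a hand-rolled illustration using Python's standard library, not NWChem's actual FoX-based writer; the geometry values are made up.

```python
import xml.etree.ElementTree as ET

CML_NS = "http://www.xml-cml.org/schema"

def water_molecule_cml() -> str:
    """Build a minimal CML-style document for a water molecule.

    Element and attribute names follow the public CML convention
    (molecule/atomArray/atom with elementType and 3-D coordinates);
    this is an illustrative sketch, not NWChem's writer.
    """
    ET.register_namespace("cml", CML_NS)
    molecule = ET.Element(f"{{{CML_NS}}}molecule", id="water")
    atom_array = ET.SubElement(molecule, f"{{{CML_NS}}}atomArray")
    atoms = [("a1", "O", 0.000, 0.000, 0.119),
             ("a2", "H", 0.000, 0.763, -0.477),
             ("a3", "H", 0.000, -0.763, -0.477)]
    for aid, elem, x, y, z in atoms:
        ET.SubElement(atom_array, f"{{{CML_NS}}}atom", id=aid,
                      elementType=elem, x3=str(x), y3=str(y), z3=str(z))
    return ET.tostring(molecule, encoding="unicode")

cml_doc = water_molecule_cml()
```

A consumer such as Avogadro would then parse such a document back into geometry; the namespace keeps the data unambiguous when mixed with other XML vocabularies.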
Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel
Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
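The kind of complex annotation query described can be sketched with a toy document. The element names below are illustrative placeholders, not the real AIM schema, and ElementTree's limited XPath subset stands in for the native XQuery extension the paper describes.

```python
import xml.etree.ElementTree as ET

# A toy annotation document; element names are illustrative,
# not the actual AIM information model.
DOC = """\
<imageAnnotations>
  <annotation id="a1" modality="CT">
    <finding code="RID3874" label="nodule"/>
    <shape type="circle" radiusMm="4.1"/>
  </annotation>
  <annotation id="a2" modality="MR">
    <finding code="RID4781" label="cyst"/>
    <shape type="polygon" radiusMm="0"/>
  </annotation>
</imageAnnotations>
"""

def findings_for_modality(xml_text: str, modality: str) -> list[str]:
    """Return finding labels for annotations of one modality,
    using ElementTree's XPath subset (a stand-in for XQuery)."""
    root = ET.fromstring(xml_text)
    path = f".//annotation[@modality='{modality}']/finding"
    return [f.get("label") for f in root.findall(path)]

labels = findings_for_modality(DOC, "CT")
```

A native XML database executes queries like this against the stored hierarchy directly, avoiding the shredding of nested annotations into relational tables.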
In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int
Van Nguyen, A; Avrin, D E; Tellis, W M; Andriole, K P; Arenson, R L
Common object request broker architecture (CORBA) is a method for invoking distributed objects across a network. There has been some activity in applying this software technology to Digital Imaging and Communications in Medicine (DICOM), but no documented demonstration of how this would actually work. We report a CORBA demonstration that is functionally equivalent and in some ways superior to the DICOM communication protocol. In addition, in and outside of medicine, there is great interest in the use of extensible markup language (XML) to provide interoperation between databases. An example implementation of the DICOM data structure in XML will also be demonstrated. Using Visibroker ORB from Inprise (Scotts Valley, CA), a test bed was developed to simulate the principal DICOM operations: store, query, and retrieve (SQR). SQR is the most common interaction between a modality device application entity (AE) such as a computed tomography (CT) scanner, and a storage component, as well as between a storage component and a workstation. The storage of a CT study by invoking one of several storage objects residing on a network was simulated and demonstrated. In addition, XML database descriptors were used to facilitate the transfer of DICOM header information between independent databases. CORBA is demonstrated to have great potential for the next version of DICOM. It can provide redundant protection against single points of failure. XML appears to be an excellent method of providing interaction between separate databases managing the DICOM information object model, and may therefore eliminate the common use of proprietary client-server databases in commercial implementations of picture archiving and communication systems (PACS).
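A minimal sketch of the XML database descriptor idea: a few well-known DICOM attributes (tag, keyword, value) are serialized one element per attribute. The layout is an assumption for illustration only, not the demonstration's actual schema nor DICOM's later official native XML model.

```python
import xml.etree.ElementTree as ET

# A few well-known DICOM attributes (group, element, keyword);
# the XML layout below is an illustrative descriptor.
HEADER = {
    (0x0008, 0x0060): ("Modality", "CT"),
    (0x0010, 0x0010): ("PatientName", "DOE^JANE"),
    (0x0020, 0x000D): ("StudyInstanceUID", "1.2.840.113619.2.55.1"),
}

def header_to_xml(header) -> str:
    """Serialize a DICOM header dict into a flat XML descriptor,
    one <attribute> element per (group,element) tag."""
    root = ET.Element("dicomHeader")
    for (group, element), (keyword, value) in sorted(header.items()):
        attr = ET.SubElement(root, "attribute",
                             tag=f"({group:04X},{element:04X})",
                             keyword=keyword)
        attr.text = value
    return ET.tostring(root, encoding="unicode")

xml_doc = header_to_xml(HEADER)
```

Because the descriptor is plain text, two independent databases can exchange header information without sharing a binary DICOM toolkit.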
Bellone, Flora; Musso, Patrick; Nesta, Lionel
In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous mark-ups. At the firm level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2…
Frank, M S; Schultz, T; Dreyer, K
To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies. Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMs. The other software system is primarily used for radiologic education on the World Wide Web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An extensible markup language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems. Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently, the transfer of hundreds of educational cases and thematic presentations between the systems can now be accomplished in a matter of minutes. The use of algorithmic transforms results in nearly 100…
PERFORMANCE MEASUREMENT OF SEVERAL RELATIONAL DATABASE SYSTEMS CAPABLE OF STORING DATA IN GML (GEOGRAPHY MARKUP LANGUAGE) FORMAT THAT CAN UNDERLIE GEOGRAPHIC INFORMATION SYSTEM APPLICATIONS
Full Text Available If we want to represent spatial data to users through GIS (Geographical Information System) applications, we have two choices for the underlying database: a general RDBMS (Relational Database Management System) storing spatial data in general types (number, char, varchar, etc.), or storing spatial data in GML (Geography Markup Language) format. GML is an XML vocabulary specialized for spatial data. If we choose GML for storing spatial data, we again have two choices: storing the data in an XML-enabled database (a relational database that can also store XML data) or in a Native XML Database (NXD), a database designed specifically for XML data. In this paper, we compare the performance of several XML-enabled databases on GML CRUD (Create-Read-Update-Delete) operations. We also examine the flexibility of XML-enabled databases from the programmer's point of view.
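A sketch of the "R" in the benchmarked CRUD operations, assuming a stored GML point in the OGC gml:Point/gml:pos form; the parcel wrapper element and coordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET

GML_NS = {"gml": "http://www.opengis.net/gml"}

# A minimal GML fragment of the kind such a benchmark would store;
# gml:Point/gml:pos follows the OGC GML convention, the <parcel>
# wrapper is a made-up application element.
PARCEL = """\
<parcel xmlns:gml="http://www.opengis.net/gml" id="p-17">
  <gml:Point srsName="EPSG:4326">
    <gml:pos>45.815 15.982</gml:pos>
  </gml:Point>
</parcel>
"""

def read_point(gml_text: str) -> tuple[float, float]:
    """The Read operation of CRUD: extract the coordinate pair
    from a stored GML point."""
    root = ET.fromstring(gml_text)
    pos = root.find("gml:Point/gml:pos", GML_NS)
    lat, lon = (float(v) for v in pos.text.split())
    return lat, lon

coords = read_point(PARCEL)
```

An XML-enabled database would run an equivalent extraction server-side; the benchmark's question is how fast such path navigation is compared with a native XML store.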
National Aeronautics and Space Administration — The KML documentation standard provides a solution for imagery integration into mapping tools that support the KML standard, specifically Google Earth. Using…
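A minimal KML Placemark of the kind Google Earth ingests can be generated with the standard library alone; note that KML orders coordinates longitude,latitude[,altitude]. The place name and coordinates below are made up.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def placemark_kml(name: str, lon: float, lat: float) -> str:
    """Emit a minimal KML Placemark loadable by Google Earth.
    KML coordinates are ordered longitude,latitude[,altitude]."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

doc = placemark_kml("Imagery footprint centre", -95.36, 29.76)
```

Saving the string as a `.kml` file is enough for Google Earth to display the point; imagery overlays use the same document structure with a GroundOverlay element instead of a Point.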
Meystre, Stephanie; Haug, Peter J
We are developing tools to help maintain a complete, accurate and timely problem list within a general-purpose Electronic Medical Record system. As a part of this project, we have designed a system to automatically retrieve medical problems from free-text documents. Here we describe an information model based on XML (eXtensible Markup Language) and compliant with the CDA (Clinical Document Architecture). This model is used to ease the exchange of clinical data between the Natural Language Understanding application that retrieves potential problems from narrative documents and the problem-list management application.
…eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route, describing a model based on entities and the relations between them, implemented in MySQL (which uses SQL, the Structured Query Language)…
National Aeronautics and Space Administration — The MPC (Model Process Control) language enables the capture, communication and preservation of a simulation instance, with sufficient detail that it can be...
De Loecker, Jan; Warzynski, Frederic
We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We see these results as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.
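The core of the De Loecker-Warzynski approach reduces to a simple ratio: the markup equals the output elasticity of a flexible input divided by that input's share of revenue, where the elasticity comes from production-function estimation with a control function for unobserved productivity. A sketch with made-up numbers:

```python
def markup(output_elasticity: float, revenue_share: float) -> float:
    """De Loecker-Warzynski markup estimator: output elasticity of a
    flexible input divided by that input's revenue share. The numbers
    used below are illustrative, not estimates from any dataset."""
    return output_elasticity / revenue_share

# A plant where materials have an output elasticity of 0.65 but absorb
# only 50% of revenue implies price equal to 1.3 times marginal cost.
mu = markup(0.65, 0.50)
```

The empirical work in the paper is in estimating the elasticity consistently; once it is in hand, the markup follows from this ratio firm by firm.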
The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.
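The study's simple OLS setup can be sketched in a few lines. The (retail price, percentage markup) pairs below are synthetic numbers shaped like the reported inverse relationship, not the study's data.

```python
def ols_simple(xs, ys):
    """Closed-form simple OLS regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

# Synthetic (retail price in dollars, restaurant markup in percent)
# pairs: cheaper bottles get proportionally larger markups.
retail = [8, 12, 20, 35, 60]
markup_pct = [210, 180, 150, 120, 95]
intercept, slope = ols_simple(retail, markup_pct)
```

A negative slope with a large positive intercept is exactly the inverse price-markup relationship the study reports; running the regression separately for red and white wine only changes the coefficients, not the sign pattern.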
De Loecker, Jan; Warzynski, Frederic
…and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups, and iii) firms' markups increase (decrease) upon export entry (exit). We see these findings as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.
log2markup extracts parts of the text version of the Stata log produced by the log command and transforms the logfile into a markup-based document with the same name, but with extension markup (or as otherwise specified in the extension option) instead of log. The author usually uses markdown for writing documents. However…
Full Text Available The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.
Graça, Andreia Sofia Dinis
Business Intelligence comprises a set of techniques and tools that are fundamental to improving the quantitative and qualitative value of the information derived from a large volume of data, supporting the execution and implementation of business strategies and thereby conferring a major advantage in the global market and in the world of entrepreneurship. The techniques mentioned above are generally used by consultants, and it is in this context that the…
Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo
We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
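The standard parsimonious-model estimation uses EM: the E-step reweights each term count by how much better the document model explains it than the background corpus model, and the M-step renormalizes. A minimal sketch (the mixing weight lam and the toy counts are illustrative):

```python
def parsimonious_lm(doc_tf, corpus_p, lam=0.5, iters=50):
    """EM estimation of a parsimonious document language model:
    probability mass already explained by the corpus model is pushed
    out of the document model, concentrating it on the document's
    distinctive terms. E-step reweights term counts, M-step renormalizes."""
    total = sum(doc_tf.values())
    p_doc = {t: tf / total for t, tf in doc_tf.items()}  # MLE start
    for _ in range(iters):
        e = {t: doc_tf[t] * lam * p_doc[t] /
                (lam * p_doc[t] + (1 - lam) * corpus_p[t])
             for t in doc_tf}
        norm = sum(e.values())
        p_doc = {t: v / norm for t, v in e.items()}
    return p_doc

# "the" is frequent in the corpus, so its document probability is
# discounted relative to its maximum-likelihood estimate of 0.8.
doc_tf = {"hydrate": 5, "the": 20}
corpus_p = {"hydrate": 0.0001, "the": 0.07}
p = parsimonious_lm(doc_tf, corpus_p)
```

The resulting model stores fewer effective parameters per document, which is the "parsimony" the paper refers to: stopword-like mass migrates to the shared background model.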
Costa, Luís; Palma, Nuno
In this note we show that the claim from Chen et al (2005) that their model generates an endogenous markup is incorrect. This is not only a nomenclature issue: using the fixed markup, which we show to be the only one consistent with the structure of the model, implies that the main conclusions in that paper do not hold. In particular, government expenditure in infrastructure cannot affect the business cycle in this model by deliberately changing the market structure of the economy.
Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M
Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup - the relative increase in price over costs - has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95%CI: +69.5% to +96.9%), whereas for-profit status in unconcentrated markets was associated with only a 62.9% higher markup compared to public/private not-for-profit status in unconcentrated markets (95%CI: +45.4% to +81.1%), indicating an interaction between market concentration and markup. Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.
van der Maas, Arnoud A.F.; Ter Hofstede, Arthur H.M.; Ten Hoopen, A. Johannes
Objective: The development of tailor-made domain-specific modeling languages is sometimes desirable in medical informatics. Naturally, the development of such languages should be guided. The purpose of this article is to introduce a set of requirements for such languages and show their application in analyzing and comparing existing modeling languages. Design: The requirements arise from the practical experience of the authors and others in the development of modeling languages in both general informatics and medical informatics. The requirements initially emerged from the analysis of information modeling techniques. The requirements are designed to be orthogonal, i.e., one requirement can be violated without violation of the others. Results: The proposed requirements for any modeling language are that it be “formal” with regard to syntax and semantics, “conceptual,” “expressive,” “comprehensible,” “suitable,” and “executable.” The requirements are illustrated using both the medical logic modules of the Arden Syntax as a running example and selected examples from other modeling languages. Conclusion: Activity diagrams of the Unified Modeling Language, task structures for work flows, and Petri nets are discussed with regard to the list of requirements, and various tradeoffs are thus made explicit. It is concluded that this set of requirements has the potential to play a vital role in both the evaluation of existing domain-specific languages and the development of new ones. PMID:11230383
Full Text Available A library website is an extension of library services; the correctness of library webpages bears directly on the accessibility and correctness dimensions of information ethics, which highlights the importance for reader services of webpages that conform to web-design standards. Webpage markup languages are one such standard, and testing the correctness of library webpages can clearly reveal the degree to which they conform, helping libraries develop or maintain standards-compliant pages. This study used the Markup Validation Service provided by the W3C to test the homepage markup of 158 college and university libraries and 24 public libraries, in order to investigate the current state of markup correctness on the websites of domestic college and public libraries. The results showed that the validation pass rate of library homepages was 0%, and that more than one third of the homepages had over 100 errors, indicating that the markup correctness of domestic library webpages urgently needs improvement. For validation errors for which the W3C offers no correction advice, the study also proposes example solutions for libraries' reference in creating and maintaining webpages.
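One narrow slice of what the W3C Markup Validation Service checks, XML well-formedness, can be sketched locally with the standard library; the real service additionally checks DTD/schema validity, obsolete elements, attribute rules, and much more. The sample pages are invented.

```python
import xml.etree.ElementTree as ET

def well_formedness_errors(markup: str) -> list[str]:
    """Report XML well-formedness problems in a page: a small local
    stand-in for one class of error the W3C validator flags."""
    try:
        ET.fromstring(markup)
        return []
    except ET.ParseError as exc:
        return [str(exc)]

good = "<html><head><title>Library</title></head><body/></html>"
bad = "<html><body><p>unclosed paragraph</body></html>"
errors_good = well_formedness_errors(good)
errors_bad = well_formedness_errors(bad)
```

A batch script looping such a check over 182 homepage URLs would reproduce the shape of the study's survey, though not its full error taxonomy.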
Sharp, J.K. [Sandia National Labs., Albuquerque, NM (United States)
This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do it in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it has all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax, laws, machine function).
Lallement, Christophe; Haiech, Jacques
The article deals with BB-SPICE (SPICE for Biochemical and Biological Systems), an extension of the famous Simulation Program with Integrated Circuit Emphasis (SPICE). BB-SPICE environment is composed of three modules: a new textual and compact description formalism for biological systems, a converter that handles this description and generates the SPICE netlist of the equivalent electronic circuit and NGSPICE which is an open-source SPICE simulator. In addition, the environment provides back and forth interfaces with SBML (System Biology Markup Language), a very common description language used in systems biology. BB-SPICE has been developed in order to bridge the gap between the simulation of biological systems on the one hand and electronics circuits on the other hand. Thus, it is suitable for applications at the interface between both domains, such as development of design tools for synthetic biology and for the virtual prototyping of biosensors and lab-on-chip. Simulation results obtained with BB-SPICE and COPASI (an open-source software used for the simulation of biochemical systems) have been compared on a benchmark of models commonly used in systems biology. Results are in accordance from a quantitative viewpoint but BB-SPICE outclasses COPASI by 1 to 3 orders of magnitude regarding the computation time. Moreover, as our software is based on NGSPICE, it could take profit of incoming updates such as the GPU implementation, of the coupling with powerful analysis and verification tools or of the integration in design automation tools (synthetic biology). PMID:28787027
Ten Raa, T.
Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear
Acronyms: Viskit (Visual Kit), VRML (Virtual Reality Modeling Language), W3C, Web3D Consortium, X3D (Extensible 3D Graphics), XML (Extensible Markup Language), XSLT… successor to the Virtual Reality Modeling Language (VRML). X3D features extensions to VRML (e.g., Humanoid Animation (HANIM), NURBS (Non-uniform…
Gong, Tao; Shuai, Lan
Memory is essential to many cognitive tasks, including language. Apart from empirical studies of memory effects on language acquisition and use, there have been few evolutionary explorations of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; that this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and that the coevolution stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors for natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language.
… This work synthesizes air planning data messages, Air Tasking Order (ATO) data messages written in XML and based upon the USMTF standard, and displays these messages as a three-dimensional (3D…
Full Text Available The article deals with BB-SPICE (SPICE for Biochemical and Biological Systems), an extension of the famous Simulation Program with Integrated Circuit Emphasis (SPICE). The BB-SPICE environment is composed of three modules: a new textual and compact description formalism for biological systems, a converter that handles this description and generates the SPICE netlist of the equivalent electronic circuit, and NGSPICE, which is an open-source SPICE simulator. In addition, the environment provides back and forth interfaces with SBML (System Biology Markup Language), a very common description language used in systems biology. BB-SPICE has been developed in order to bridge the gap between the simulation of biological systems on the one hand and electronic circuits on the other hand. Thus, it is suitable for applications at the interface between both domains, such as development of design tools for synthetic biology and for the virtual prototyping of biosensors and lab-on-chip. Simulation results obtained with BB-SPICE and COPASI (an open-source software package used for the simulation of biochemical systems) have been compared on a benchmark of models commonly used in systems biology. Results are in accordance from a quantitative viewpoint, but BB-SPICE outclasses COPASI by 1 to 3 orders of magnitude regarding computation time. Moreover, as our software is based on NGSPICE, it could profit from incoming updates such as the GPU implementation, from coupling with powerful analysis and verification tools, or from integration in design automation tools (synthetic biology).
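The kind of computation being benchmarked, numerical integration of mass-action ODEs, can be sketched with a one-reaction toy model (A → B, dA/dt = -k·A) integrated by explicit Euler; BB-SPICE and COPASI of course use far more sophisticated solvers, and the rate constant here is arbitrary.

```python
import math

def simulate_decay(a0: float, k: float, t_end: float, dt: float = 1e-4):
    """Explicit-Euler integration of the single mass-action reaction
    A -> B (dA/dt = -k*A): a toy stand-in for the ODE systems that
    SPICE-style and COPASI-style simulators solve."""
    a, t = a0, 0.0
    while t < t_end:
        a += -k * a * dt  # Euler step for the mass-action rate law
        t += dt
    return a

a_final = simulate_decay(a0=1.0, k=2.0, t_end=1.0)
analytic = 1.0 * math.exp(-2.0 * 1.0)  # exact solution for comparison
```

Comparing the numerical result against the analytic solution is exactly the kind of quantitative agreement check the benchmark performs between the two simulators.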
Any language teacher who has gone through some kind of training program for the teaching of English should be familiar with various specific language teaching models that constitute the core of the training process. A language teaching model is a guide that helps the trainee to sequence the activities designed for the expectations and needs of learners in a lesson. This paper reviews the common language teaching models in teacher training programs (PPP, OHE, III, TTT, TBLT, ESA, ARC) and disc...
Cincinnati, Ohio's TechnoSoft Inc. is a leading provider of object-oriented modeling and simulation technology used for commercial and defense applications. With funding from Small Business Innovation Research (SBIR) contracts issued by Langley Research Center, the company continued development on its adaptive modeling language, or AML, originally created for the U.S. Air Force. TechnoSoft then created what is now known as its Integrated Design and Engineering Analysis Environment, or IDEA, which can be used to design a variety of vehicles and machinery. IDEA's customers include clients in green industries, such as designers for power plant exhaust filtration systems and wind turbines.
Domain-specific markup languages and descriptive metadata: their functions in scientific resource discovery
Marcia Lei Zeng
Full Text Available While metadata has been a strong focus within information professionals' publications, projects, and initiatives during the last two decades, a significant number of domain-specific markup languages have also been developing on a parallel path at the same rate as metadata standards; yet, they do not receive comparable attention. This essay discusses the functions of these two kinds of approaches in scientific resource discovery and points out their potential complementary roles through appropriate interoperability approaches.
This paper summarizes and reviews the literature regarding language learning strategies and their training models, pointing out the significance of language learning strategies to EFL learners and arguing that an applicable and effective training model for language learning strategies, beneficial both to EFL learners and instructors, is badly needed.
Rangarajan, K; Mukund, M
A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con
Demirel, Doga; Yu, Alexander; Baer-Cooper, Seth; Halic, Tansel; Bayrak, Coskun
This paper presents the Generative Anatomy Modeling Language (GAML) for generating variations of 3D virtual human anatomy in real-time. This framework provides a set of operators for modification of a reference base 3D anatomy. The perturbation of the 3D models is subject to nonlinear geometry constraints so as to create an authentic human anatomy. GAML was used to create difficult 3D anatomical scenarios for virtual simulation of airway management techniques such as Endotracheal Intubation (ETI) and Cricothyroidotomy (CCT). Difficult scenarios for each technique were defined and the model variations procedurally created with GAML. This study presents details of the GAML design, its set of operators, and its types of constraints. Cases of CCT and ETI difficulty were generated and confirmed by expert surgeons. Execution remained real-time as the complexity of the nonlinear-programming constraints increased. Copyright © 2017 John Wiley & Sons, Ltd.
Gong, Tao; Shuai, Lan; Zhang, Menghan
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolution of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina
Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object models. Our language, Featherweight VML, has several distinctive features. Its architecture and operations are inspired by the recently proposed Common Variability Language (CVL). Its semantics is considerably simpler than that of CVL, while remaining confluent (unlike CVL). We simplify complex ..., which makes it suitable to serve as a specification for implementations of trustworthy variant derivation. Featherweight VML offers insights into the execution of other variability modeling languages such as the Orthogonal Variability Model and Delta Modeling. To the best of our knowledge...
Computational reflection is a well known technique applied in many existing programming languages ranging from functional to object-oriented languages. In this paper we study the possibilities and benefits of introducing and using reflection in rule-based model transformation languages. The paper
Schlette, Christian; Roßmann, Jürgen
More and more of our indoor/outdoor environments are available as 3D digital models. In particular, digital models such as the CityGML (City Geography Markup Language) format for cities and the BIM (Building Information Modeling) methodology for buildings are becoming important standards...
The paper surveys recent research on language evolution, focusing in particular on models of cultural evolution and how they are being developed and tested using agent-based computational simulations and robotic experiments. The key challenges for evolutionary theories of language are outlined and some example results are discussed, highlighting models explaining how linguistic conventions get shared, how conceptual frameworks get coordinated through language, and how hierarchical structure could emerge. The main conclusion of the paper is that cultural evolution is a much more powerful process than usually assumed, implying that fewer innate structures or biases are required and consequently that human language evolution has to rely less on genetic evolution.
Merity, Stephen; Keskar, Nitish Shirish; Socher, Richard
Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent r...
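The core regularizer named above, DropConnect on the hidden-to-hidden weights, can be sketched in a few lines of NumPy. This is an illustrative assumption-laden toy, not the paper's implementation: a plain tanh RNN cell stands in for the full LSTM gates, and all sizes and data are invented.

```python
import numpy as np

# Minimal sketch of the weight-dropped idea: zero a random subset of the
# hidden-to-hidden matrix U (one mask per sequence), rather than dropping
# activations as ordinary dropout would.

rng = np.random.default_rng(0)

def weight_drop(U, p):
    """Return U with each entry zeroed independently with probability p."""
    mask = rng.random(U.shape) >= p
    return U * mask / (1.0 - p)      # inverted-dropout scaling

def rnn_step(h, x, W, U_dropped):
    # A plain tanh cell stands in for the LSTM's gated update here.
    return np.tanh(x @ W + h @ U_dropped)

W = rng.standard_normal((4, 8)) * 0.1   # input-to-hidden weights
U = rng.standard_normal((8, 8)) * 0.1   # hidden-to-hidden weights
U_d = weight_drop(U, p=0.5)             # sample one mask for the whole sequence
h = np.zeros(8)
for x in rng.standard_normal((10, 4)):  # a toy 10-step input sequence
    h = rnn_step(h, x, W, U_d)
print(h.shape)
```

Because the mask is fixed across time steps, the same sparsified recurrence is applied at every step, which is what makes this form of regularization compatible with optimized recurrent kernels.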
Full Text Available ...generating huge amounts of data, which typically must be shared amongst many collaborators and researchers. To store and use such data efficiently, it is paramount that biomedical researchers...
Full Text Available ...a request for proposal for the standardization of the genomic variation data description form in January 2004. Because the 1st international conference was fruitful, we decided that such an international conference should be held continuously... It was an outcome of the conference that we proposed the initial submission of PML to Life...
...for Advanced Graphical Environments; SMAL, Savage Modeling Analysis Language; VRML, Virtual Reality Modeling Language; SRTM, Shuttle Radar Topography... X3D is the successor to the Virtual Reality Modeling Language (VRML). X3D features extensions to VRML (e.g. Humanoid Animation, NURBS, GeoVRML, etc.)... formats; for example, it can export into the VRML format so the model can be exploited and modeled in other 3D editing tools such as X3D-Edit. Its...
Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat
We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.
Waning, Brenda; Maddix, Jason; Soucy, Lyne
Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in the health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with ... mark-ups. Medicine mark-ups needed for sustainability were greater than originally envisioned by the network administration. In 2005, 55%, 35%, and 10% of the network's top 50 products revealed mark-ups of ... 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of ... > 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy...
Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid
Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications, the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and, subsequently, to make typing predictions and corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, and 70% of all test scores lie above basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
The Virtual Reality Modeling Language (VRML) and Java provide a standardized, portable and platform-independent way to render dynamic, interactive 3D scenes across the Internet. Integrating two powerful and portable software languages provides interactive 3D graphics plus complete programming capabilities plus network access. Intended for programmers and scene authors, this paper provides a VRML overview, synopsizes the open development history of the specification, provides a condensed summ...
The purpose of this research is to explore the interrelations of language, context and text and the implications for language teaching. In this investigation, the children were recorded and their conversations were transcribed. The transcription of the conversation and the language of advertisement were analysed using the Halliday and Jacobson models of language functions. How the functional model was used in the assessment of texts and how language, context and text were interrelate...
Full Text Available When developing an information system, it is important to create clear models and choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. The article presents a theoretical comparison of business rule and business process modeling languages. According to selected modeling aspects, the comparison spans the different business process modeling languages and business rule representation languages. Also, the best-fitting language set is selected for a three-layer framework for business-rule-based software modeling.
Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong
In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
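The melody-selection idea above can be sketched with a toy bigram model: train on known melodic pitch sequences, then pick the track whose note sequence gets the highest average log-likelihood. The training melodies, track contents and add-one smoothing here are invented for illustration; they are not the letter's actual corpus or smoothing scheme.

```python
import math
from collections import Counter

# Toy sketch: score each MIDI track with a bigram model trained on
# melodic pitch sequences; pick the track with the highest average
# per-bigram log-likelihood.

def train_bigram(sequences, vocab):
    counts, context = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
            context[a] += 1
    # Add-one smoothing so unseen bigrams keep non-zero probability.
    def logp(a, b):
        return math.log((counts[a, b] + 1) / (context[a] + len(vocab)))
    return logp

melodies = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]]  # stepwise lines
vocab = {n for seq in melodies for n in seq}
logp = train_bigram(melodies, vocab)

def score(track):
    pairs = list(zip(track, track[1:]))
    return sum(logp(a, b) for a, b in pairs) / len(pairs)

tracks = {"melody": [60, 62, 64, 62, 60], "bass": [36, 36, 43, 36, 36]}
best = max(tracks, key=lambda t: score(tracks[t]))
print(best)   # the stepwise track scores higher than the static bass
```

Normalizing by the number of bigrams, as `score` does, keeps the comparison fair across tracks of different lengths, which matters when MIDI tracks vary widely in note count.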
M. Montone (Maurizio)
I find that risk-averse bookmakers should charge a higher mark-up on events with a greater number of outcomes. Also, they should dynamically adjust their odds to reduce profit volatility, thus generating arbitrage opportunities. An empirical analysis of the online betting market supports...
This paper takes a new look at language and knowledge modelling for corpus linguistics. Using ideas of Chaitin, a line of argument is made against language/knowledge separation in Natural Language Processing. A simplistic model that generalises approaches to language and knowledge is proposed. One of the hypothetical consequences of this model is Strong AI.
Fairmichael, Fintan; Kiniry, Joseph Roland
Many modelling languages have both a textual and a graphical form. The relationship between these two forms ought to be clear and concrete, but is instead commonly underspecified, weak, and informal. Further, processes and tool support for modelling often do not treat both forms as first-class citizens, such that their relationship is clearly and precisely defined. This paper details a formal relationship between the textual and graphical forms of a high-level modelling language called the Business Object Notation (BON). We describe the semantics of the graphical and textual representations and the relationship that holds between them. We also formally define a view on an underlying model as an extraction function, and model diffs as a means of tracking changes as a model evolves. This theoretical foundation provides a means by which tools guarantee consistency between textual and graphical notations, as well as shows how...
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of the hospital's processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing the specification of the loosely-defined sequence of steps to be performed in a clinical process. The main advantages of PQL lie in two areas, usability and efficiency: 1) the view on data through the "glasses" of a familiar process; 2) the simple and easy-to-perceive means of setting filtering conditions, which require no more expertise than using spreadsheet applications; 3) the dynamic response to each step in construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) the selected means of filtering and data retrieval, which allow queries to be executed in O(n) time in the size of the dataset. We are about to continue developing this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool
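The O(n) claim for query execution comes down to evaluating all filter conditions in a single pass over the records. The sketch below is purely illustrative; the record fields ("step", "duration_h") and the filter are invented, not MedMod or PQL syntax.

```python
# Hypothetical illustration of why such filtering scales as O(n): one
# linear pass over process-instance records evaluates every condition.

def filter_instances(records, step, max_hours):
    hits = []
    for rec in records:            # single pass -> O(n) in dataset size
        if rec["step"] == step and rec["duration_h"] <= max_hours:
            hits.append(rec["id"])
    return hits

records = [
    {"id": 1, "step": "lab-test", "duration_h": 2},
    {"id": 2, "step": "lab-test", "duration_h": 30},
    {"id": 3, "step": "surgery",  "duration_h": 5},
]
print(filter_instances(records, "lab-test", 24))   # → [1]
```

Because every condition is decidable per record, no joins or backtracking are needed, which is the property that keeps the cost linear regardless of how many conditions the user stacks up.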
Lewis, Shevaun; Phillips, Colin
We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…
Cattani, Gian Luca; Winskel, Glynn
...caused through traditional models not always possessing the cartesian liftings used in the breakdown of process operations are side-stepped. The abstract results are applied to show that hereditary history-preserving bisimulation is a congruence for CCS-like languages to which is added a refinement operator on event structures as proposed by van Glabbeek and Goltz.
...are making the sparse training data problem less serious for certain domains, such as ARPA's Wall Street Journal corpus, which is part of the 305... our memory calculations. Using a 58,000-word dictionary and 45 million words of Wall Street Journal training data (1992-1994), the memory... and used to create models of the same size. The first data set consists of 45.3 million words of Wall Street Journal data (1992-1994), the same data...
Ntongieh, Njwe Amah Eyovi
This paper investigates Language models with an emphasis on an appraisal of the Competence Based Language Teaching Model (CBLT) employed in the teaching and learning of English language in Cameroon. Research endeavours at various levels combined with cumulative deficiencies experienced over the years have propelled educational policy makers to…
de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D
This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the
...facilities are inadequate. The Visual Model Query Language (VMQL) is a novel approach that uses the respective modeling language of the source model as the query language, too. The semantics of VMQL is defined formally based on graphs, so that query execution can be defined as graph matching. VMQL has been applied to several visual modeling languages, implemented, and validated in small case studies and several controlled experiments.
Fernando, Chrisantha; Valijärvi, Riitta-Liisa; Goldstein, Richard A
Why and how have languages died out? We have devised a mathematical model to help us understand how languages go extinct. We use the model to ask whether language extinction can be prevented in the future and why it may have occurred in the past. A growing number of mathematical models of language dynamics have been developed to study the conditions for language coexistence and death, yet their phenomenological approach compromises their ability to influence language revitalization policy. In contrast, here we model the mechanisms underlying language competition and look at how these mechanisms are influenced by specific language revitalization interventions, namely, private interventions to raise the status of the language and thus promote language learning at home, public interventions to increase the use of the minority language, and explicit teaching of the minority language in schools. Our model reveals that it is possible to preserve a minority language but that continued long-term interventions will likely be necessary. We identify the parameters that determine which interventions work best under certain linguistic and societal circumstances. In this way the efficacy of interventions of various types can be identified and predicted. Although there are qualitative arguments for these parameter values (e.g., the responsiveness of children to learning a language as a function of the proportion of conversations heard in that language, the relative importance of conversations heard in the family and elsewhere, and the amplification of spoken to heard conversations of the high-status language because of the media), extensive quantitative data are lacking in this field. We propose a way to measure these parameters, allowing our model, as well as other models in the field, to be validated.
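For context, the phenomenological baseline this abstract contrasts itself with is the classic Abrams-Strogatz equation, dx/dt = (1-x) s x^a - x (1-s) (1-x)^a, where x is the minority-speaker fraction, s the language's status, and a a fitted exponent. The sketch below integrates that classic model (not the authors' own mechanistic one) to show how raising status s, one of the interventions discussed, changes the outcome; all parameter values are illustrative.

```python
# Classic Abrams-Strogatz language-competition dynamics, integrated with
# forward Euler. x = fraction speaking the minority language, s = status.

def simulate(x0, s, a=1.31, dt=0.01, steps=20000):
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
        x += dx * dt
    return x

low  = simulate(x0=0.3, s=0.3)   # low status: the minority language dies out
high = simulate(x0=0.3, s=0.7)   # status raised by intervention: it survives
print(round(low, 3), round(high, 3))
```

The bistability visible here (extinction of one language or the other) is exactly the feature that makes sustained status-raising interventions necessary in such models: a one-off change of x0 alone does not move the system across the unstable equilibrium for long.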
Gómez, Harold F.; Hucka, Michael; Keating, Sarah M.; Nudelman, German; Iber, Dagmar; Sealfon, Stuart C.
Summary: MATLAB is popular in biological research for creating and simulating models that use ordinary differential equations (ODEs). However, sharing or using these models outside of MATLAB is often problematic. A community standard such as the Systems Biology Markup Language (SBML) can serve as a neutral exchange format, but translating models from MATLAB to SBML can be challenging, especially for legacy models not written with translation in mind. We developed MOCCASIN (Model ODE Converter for ...
Full Text Available The ability to use the mother tongue is possessed by every child. Children can master the language without receiving specific education; in a short time a child masters the language well enough to communicate with others. There are many theories of language acquisition. One that still exists is the Native Model of Language Acquisition, pioneered by Noam Chomsky. In this theory, the innate faculty that lets a child acquire language naturally and automatically when the language is used is the Language Acquisition Device (LAD). LAD constitutes a hypothesized feature of grammatical rules used progressively by a child in accordance with his psychological development.
Fukś, Henryk; Farzad, Babak; Cao, Yi
Inflection graphs are highly complex networks representing relationships between inflectional forms of words in human languages. For so-called synthetic languages, such as Latin or Polish, they have a particularly interesting structure due to the abundance of inflectional forms. We construct the simplest form of inflection graphs, namely a bipartite graph in which one group of vertices corresponds to dictionary headwords and the other group to inflected forms encountered in a given text. We then study the projection of this graph on the set of headwords. The projection decomposes into a large number of connected components, to be called word groups. The distribution of word group sizes exhibits some remarkable properties, resembling the cluster distribution in lattice percolation near the critical point. We propose a simple model which produces graphs of this type, reproducing the desired component distribution and other topological features.
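The construction described, a bipartite headword/form graph projected onto headwords and decomposed into word groups, can be sketched with the standard library alone. The tiny Latin-flavoured word list below is invented for illustration; two headwords end up linked because they share the ambiguous surface form "populi".

```python
from collections import defaultdict

# Bipartite graph: headwords on one side, inflected forms on the other.
form_to_heads = defaultdict(set)
pairs = [  # (headword, inflected form) - toy data
    ("populus_tree", "populi"), ("populus_people", "populi"),
    ("populus_people", "populo"), ("amo", "amat"), ("amo", "amant"),
]
for head, form in pairs:
    form_to_heads[form].add(head)

# Projection onto headwords: connect headwords sharing a form.
adj = defaultdict(set)
for heads in form_to_heads.values():
    for a in heads:
        for b in heads:
            if a != b:
                adj[a].add(b)

# Word groups = connected components of the projection (simple DFS).
heads = {h for h, _ in pairs}
seen, groups = set(), []
for h in heads:
    if h in seen:
        continue
    comp, stack = set(), [h]
    while stack:
        node = stack.pop()
        if node in comp:
            continue
        comp.add(node)
        stack.extend(adj[node] - comp)
    seen |= comp
    groups.append(comp)

print(sorted(len(g) for g in groups))   # word group sizes → [1, 2]
```

On real corpora the interesting quantity is precisely this size distribution, computed over many thousands of components rather than two.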
López, Francesca; Scanlan, Martin; Gorman, Brenda K.
This study investigated the degree to which the quality of teachers' language modeling contributed to reading achievement for 995 students, both English language learners and native English speakers, across developmental bilingual, dual language, and monolingual English classrooms. Covariates included prior reading achievement, gender, eligibility…
Full Text Available Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic PDP architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development.
Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto
We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation...
Hiemstra, Djoerd; de Jong, Franciska M.G.
Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
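The family of models referred to here ranks documents by the probability that the document's language model generated the query. A minimal sketch of that query-likelihood idea, with Jelinek-Mercer smoothing against the collection model, is below; the documents, query and lambda value are toy assumptions, not the article's experimental setup.

```python
import math
from collections import Counter

# Query-likelihood retrieval sketch: score(d) = sum_w log P(w | d), with
# P(w | d) smoothed between the document and the whole collection.

docs = {
    "d1": "statistical language models for retrieval".split(),
    "d2": "neural networks for vision".split(),
}
coll = Counter(w for d in docs.values() for w in d)
coll_len = sum(coll.values())

def score(query, doc, lam=0.5):
    tf, dlen = Counter(doc), len(doc)
    s = 0.0
    for w in query:
        # Jelinek-Mercer: mix document and collection probabilities.
        p = lam * tf[w] / dlen + (1 - lam) * coll[w] / coll_len
        s += math.log(p) if p > 0 else float("-inf")
    return s

query = "language retrieval".split()
best = max(docs, key=lambda d: score(query, docs[d]))
print(best)   # the document containing both query terms wins
```

Smoothing is what makes the model usable at all: without the collection term, a single query word absent from a document would drive its score to minus infinity.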
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
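A well-structured XML log of the kind proposed can be consumed directly with the standard library. The element and attribute names below ("log", "event", "t", "type") are invented for illustration, not the report's actual schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a minimal game/simulation log entry of the general
# shape such a schema might govern, parsed into (timestamp, type) pairs.

xml_log = """
<log session="s-01">
  <event t="3.2" type="move"  detail="tile-4"/>
  <event t="5.9" type="check" detail="hypothesis-A"/>
</log>
"""

root = ET.fromstring(xml_log)
events = [(float(e.get("t")), e.get("type")) for e in root.iter("event")]
print(events)   # → [(3.2, 'move'), (5.9, 'check')]
```

The benefit of fixing a schema up front is exactly this: every analysis method downstream can assume the same event structure instead of re-implementing ad-hoc log parsing per game.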
Tare, Medha; Gelman, Susan A.
Parental input represents an important source of language socialization. Particularly in bilingual contexts, parents may model pragmatic language use and metalinguistic strategies to highlight language differences. The present study examines multiparty interactions involving 28 bilingual English- and Marathi-speaking parent-child pairs in the…
Wieringa, Roelf J.
Version 2 of a language (CMSL) to specify conceptual models is defined. CMSL consists of two parts, the value specification language VSL and the object specification language OSL. There is a formal semantics and an inference system for CMSL, but research on this still continues. A method for...
Computational reflection is a well-known technique applied in many existing programming languages ranging from functional to object-oriented languages. In this paper we study the possibilities and benefits of introducing and using reflection in a rule-based model transformation language. The paper
Thomas, Michael S C; Forrester, Neil A; Ronald, Angelica
Socioeconomic status (SES) is an important environmental predictor of language and cognitive development, but the causal pathways by which it operates are unclear. We used a computational model of development to explore the adequacy of manipulations of environmental information to simulate SES effects in English past-tense acquisition, in a data set provided by Bishop (2005). To our knowledge, this is the first application of computational models of development to SES. The simulations addressed 3 new challenges: (a) to combine models of development and individual differences in a single framework, (b) to expand modeling to the population level, and (c) to implement both environmental and genetic/intrinsic sources of individual differences. The model succeeded in capturing the qualitative patterns of regularity effects in both population performance and the predictive power of SES that were observed in the empirical data. The model suggested that the empirical data are best captured by relatively wider variation in learning abilities and relatively narrow variation in (and good quality of) environmental information. There were shortcomings in the model's quantitative fit, which are discussed. The model made several novel predictions, with respect to the influence of SES on delay versus giftedness, the change of SES effects over development, and the influence of SES on children of different ability levels (gene-environment interactions). The first of these predictions was that SES should reliably predict gifted performance in children but not delayed performance, and the prediction was supported by the Bishop data set. Finally, the model demonstrated limits on the inferences that can be drawn about developmental mechanisms on the basis of data from individual differences. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel
The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach for selecting the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but rather demonstrates how two existing approaches can be combined to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
Armeni, Kristijan; Willems, Roel M; Frank, Stefan L
Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data, only recently have the models been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Huang, Chen; Ding, Xiaoqing; Chen, Yan
This paper investigates the design and implementation of language models. In contrast to previous research, we emphasize the importance of word-based n-gram models in the study of language modelling. We build a word-based language model using the SRILM toolkit and apply it to contextual language processing of Chinese documents. A modified Absolute Discount smoothing algorithm is proposed to reduce the perplexity of the language model. The word-based language model improves the post-processing performance of an online handwritten character recognition system compared with a character-based language model, but it also greatly increases computation and storage costs. Besides quantizing the model data non-uniformly, we design a new tree storage structure to compress the model size, which also increases search efficiency. We evaluate this set of approaches on a test corpus of recognition results for online handwritten Chinese characters, and propose a modified confidence measure for candidate characters that yields accurate posterior probabilities while reducing complexity. The weighted combination of linguistic knowledge and candidate confidence information proves successful and can be developed further to improve recognition accuracy.
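The smoothing idea above can be illustrated with a minimal sketch of standard absolute discounting for bigrams. This is not the paper's modified variant (whose details are not given here); the toy corpus and the discount value are illustrative.

```python
from collections import Counter

def absolute_discount_bigram(tokens, d=0.75):
    """Absolutely discounted bigram model P(w2 | w1): subtract a fixed
    discount d from every seen bigram count and redistribute the freed
    probability mass via the unigram distribution."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def prob(w1, w2):
        c1 = unigrams[w1]
        if c1 == 0:                        # unseen history: back off fully
            return unigrams[w2] / total
        c12 = bigrams[(w1, w2)]
        seen_types = sum(1 for (a, _b) in bigrams if a == w1)
        lam = d * seen_types / c1          # mass freed by discounting
        return max(c12 - d, 0) / c1 + lam * unigrams[w2] / total

    return prob

toks = "the cat sat on the mat the cat ran".split()
p = absolute_discount_bigram(toks)
# for a seen history, probabilities over the vocabulary sum to 1
mass = sum(p("the", w) for w in set(toks))
```

The discounted mass goes to unseen continuations, so `p("the", "sat")` is small but nonzero even though the bigram never occurs.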
Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)
Chien, Jen-Tzung; Ku, Yuan-Chu
A language model (LM) computes the probability of a word sequence, providing the basis for word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularizing the RNN-LM and applies it to continuous speech recognition. We penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance when applying the rapid BRNN-LM under different conditions.
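The regularized objective described above (cross-entropy plus a zero-mean Gaussian prior on the weights) reduces to an L2 penalty under a maximum a posteriori treatment. A minimal sketch, using a toy softmax classifier rather than the paper's actual RNN-LM:

```python
import math
import random

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def map_objective(W, data, alpha):
    """Regularized cross-entropy: mean negative log-likelihood plus the
    negative log of a zero-mean Gaussian prior over the weights, which
    reduces to an L2 penalty with strength alpha (the prior precision)."""
    nll = 0.0
    for x, y in data:
        logits = [sum(wi * xi for wi, xi in zip(col, x)) for col in W]
        nll -= math.log(softmax(logits)[y])
    l2 = sum(w * w for col in W for w in col)
    return nll / len(data) + 0.5 * alpha * l2

random.seed(0)
# toy data: 2-d inputs, 3 classes; W holds one weight column per class
data = [([random.gauss(0, 1), random.gauss(0, 1)], random.randrange(3))
        for _ in range(20)]
W = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(3)]
loose = map_objective(W, data, alpha=0.0)   # plain cross-entropy
tight = map_objective(W, data, alpha=1.0)   # MAP objective with Gaussian prior
```

Increasing `alpha` tightens the prior, so the MAP objective penalizes large weights more strongly.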
Banegas, Darío Luis
In the last decade, there has been a major interest in content-based instruction (CBI) and content and language integrated learning (CLIL). These are similar approaches which integrate content and foreign/second language learning through various methodologies and models as a result of different implementations around the world. In this paper, I…
Language (MOCQL), an experimental declarative textual language to express queries (and constraints) on models. We introduce MOCQL by examples and its grammar, evaluate its usability by means of controlled experiments, and find that modelers perform better and experience less cognitive load when working...
Al-Sibahi, Ahmad Salim
, it is not immediately obvious what their computational expressiveness is. In this paper we present an analysis that clarifies the computational expressiveness of a large number of model transformation languages. The analysis confirms the folklore for all model transformation languages, except the bidirectional ones...
This paper describes the use of language models in various phases of a Tamil speech recognition system to improve its performance. In this work, language models are applied at several levels of speech recognition: the segmentation phase, the recognition phase, and the syllable- and word-level error correction phases.
Zhang, Menghan; Gong, Tao
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
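The underlying Lotka-Volterra competition dynamics can be sketched as a simple Euler integration. The parameter names and values below are illustrative, not the paper's estimated impacts and inheritance rates:

```python
def simulate_competition(x0, y0, rx, ry, ax, ay, steps=2000, dt=0.01):
    """Euler integration of a two-language Lotka-Volterra competition model:
        dx/dt = rx * x * (1 - x - ax * y)
        dy/dt = ry * y * (1 - y - ay * x)
    x and y are speaker fractions; ax and ay play the role of the competing
    language's impact on each population."""
    x, y = x0, y0
    for _ in range(steps):
        dx = rx * x * (1 - x - ax * y)
        dy = ry * y * (1 - y - ay * x)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# a strongly impacting language (high ay) drives its competitor toward extinction
x_end, y_end = simulate_competition(0.4, 0.4, rx=1.0, ry=1.0, ax=0.5, ay=1.5)
```

With these asymmetric impacts there is no positive interior equilibrium, so the model exhibits competitive exclusion: x approaches 1 while y decays toward 0.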
Li, Ping; Zhao, Xiaowei
Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel
Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce Visual Model Transformation Language (VMTL) following…
Kim, Woosung; Khudanpur, Sanjeev
.... We achieve this through an extension of the method of lexical triggers to the cross-language problem, and by developing a likelihood-based adaptation scheme for combining a trigger model with an N-gram model...
A framework is proposed for an integrated course in which knowledge of a language is consciously related to the processes of interpersonal communication and the cultural aspects of marketing and negotiation. (Editor)
Guapacha Chamorro, Maria Eugenia; Benavidez Paz, Luis Humberto
This paper reports an action-research study on language learning strategies in tertiary education at a Colombian university. The study aimed at improving the English language performance and language learning strategies use of 33 first-year pre-service language teachers by combining elements from two models: the cognitive academic language…
where one’s mother tongue is usually not the language of education and government. Even though English and the other well-provisioned languages are... software system that is frequently tweaked and rebuilt. Datasets are provided both as single aggregated files (one per resource type) and as many... provide information from both. However, because Ethnologue GIS data may not be redistributed, we locate and supply the nearest populated place instead
Zarrin, Bahram; Baumeister, Hubert
In this paper, we propose an integrated framework that can be used by DSL designers to implement their desired graphical domain-specific languages. This framework relies on Microsoft DSL Tools, a meta-modeling framework for building graphical domain-specific languages, and an extension of ForSpec, a … language for defining their semantics. Integrating these technologies under the umbrella of the Microsoft Visual Studio IDE allows DSL designers to use a single development environment for developing their desired domain-specific languages.
Verspoor, Karin [Los Alamos National Laboratory]; Lin, Shou-De [Los Alamos National Laboratory]
An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
An integrated model of instruction in language and culture uses a sequential method of discovering sensation, perception, concept, and principle to develop self-analysis skills in students. When planning activities for learning a language and developing cultural understanding, teachers might follow a sequence such as the following: introduce…
Young, Carl A., Ed.; Moran, Clarice M., Ed.
The flipped classroom method, particularly when used with digital video, has recently attracted many supporters within the education field. Now more than ever, language arts educators can benefit tremendously from incorporating flipped classroom techniques into their curriculum. "Applying the Flipped Classroom Model to English Language Arts…
Burr Settles; Brendan Meeder
We present half-life regression (HLR), a novel model for spaced repetition practice with applications to second language acquisition. HLR combines psycholinguistic theory with modern machine learning techniques, indirectly estimating the “halflife” of a word or concept in a student’s long-term memory. We use data from Duolingo — a popular online language learning application — to fit HLR models, reducing error by 45%+ compared to several baselines at predicting student recall rates. HLR model...
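The core of HLR as described above is a log-linear estimate of a memory half-life, h = 2^(θ·x), and a recall probability p = 2^(-Δ/h) after a lag of Δ days. A minimal sketch with hypothetical feature weights, not Duolingo's fitted model:

```python
def recall_probability(theta, features, delta_days):
    """Half-life regression sketch: the estimated half-life of an item in
    long-term memory is h = 2 ** (theta . x); recall probability after a
    lag of delta_days is p = 2 ** (-delta_days / h)."""
    h = 2.0 ** sum(t * x for t, x in zip(theta, features))
    return 2.0 ** (-delta_days / h)

# toy weights for [num_correct, num_wrong, bias] -- illustrative only
theta = [0.3, -0.4, 1.0]
strong = recall_probability(theta, [8, 1, 1], delta_days=2.0)  # well-practiced word
weak = recall_probability(theta, [1, 4, 1], delta_days=2.0)    # frequently missed word
```

More correct reviews lengthen the estimated half-life, so the well-practiced word has a higher predicted recall rate at the same lag.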
Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, a model generates new predictions that can be tested in experiments, producing new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought—Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we
Goodrich, J Marc; Lonigan, Christopher J
According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying proficiency model for the early literacy skills of Spanish-speaking language-minority children using confirmatory factor analysis. Eight hundred fifty-eight Spanish-speaking language-minority preschoolers (mean age = 60.83 months, 50.2% female) participated in this study. Results indicated that bifactor models that consisted of language-independent as well as language-specific early literacy factors provided the best fits to the data for children's phonological awareness and print knowledge skills. Correlated factors models that only included skills specific to Spanish and English provided the best fits to the data for children's oral language skills. Children's language-independent early literacy skills were significantly related across constructs and to language-specific aspects of early literacy. Language-specific aspects of early literacy skills were significantly related within but not across languages. These findings suggest that language-minority preschoolers have a common underlying proficiency for code-related skills but not language-related skills that may allow them to transfer knowledge across languages.
Speier, W; Arnold, C; Pouratian, N
The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication, and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
In this paper we investigate the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provide details of our approach for obtaining the LM for Turkish. Since an LM is obtained by statistical processing of raw text, we expect that increasing the size of the data available for training the LM will improve SR accuracy. Since this study is based on recognition of Turkish, which is a highly agglutinative language, it is important to find the appropriate size for the training data: the minimum required data size is expected to be much higher than the data needed to train a language model for a language with a low level of agglutination, such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM, as these are expected to differ for a highly agglutinative language. We show that increasing the training data size to an appropriate level improves recognition accuracy; on the other hand, changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.
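The effect of training-data size on language model quality is usually measured intrinsically via perplexity. A minimal sketch with an add-one-smoothed unigram model on a toy corpus (the Turkish data and full n-gram setup of the study above are not reproduced):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, vocab_size):
    """Perplexity of an add-one-smoothed unigram model on held-out text:
    2 ** (-mean log2 probability). Lower is better."""
    counts = Counter(train_tokens)
    n = len(train_tokens)
    log_prob = 0.0
    for w in test_tokens:
        p = (counts[w] + 1) / (n + vocab_size)
        log_prob += math.log2(p)
    return 2.0 ** (-log_prob / len(test_tokens))

train = "a b a b a b a a".split()
test = "a b a a".split()
pp_large = unigram_perplexity(train, test, vocab_size=2)        # full training data
pp_small = unigram_perplexity(train[:2], test, vocab_size=2)    # tiny training data
```

Even in this toy setting, the model trained on more data assigns the held-out text lower perplexity, which is the effect the study measures at scale.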
Wang, Felix Hao; Mintz, Toben H
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Hiemstra, Djoerd; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.
The XML standards that are currently emerging have a number of characteristics that can also be found in database management systems, like schemas (DTDs and XML schema) and query languages (XPath and XQuery). Following this line of reasoning, an XML database might resemble traditional database
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into
Mora-Cortes, Anderson; Manyakov, Nikolay V; Chumerin, Nikolay; Van Hulle, Marc M
Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies.
Thomas, Michael S C; Knowland, V C P
Purpose: In this study, the authors used neural network modeling to investigate the possible mechanistic basis of developmental language delay and to test the viability of the hypothesis that persisting delay and resolving delay lie on a mechanistic continuum with normal development. Method: The authors used a population modeling approach to study individual rates of development in 1,000 simulated individuals acquiring a notional language domain (in this study, represented by English past tense). Variation was caused by differences in internal neurocomputational learning parameters as well as the richness of the language environment. An early language delay group was diagnosed, and individual trajectories were then traced. Results: Quantitative variations in learning mechanisms were sufficient to produce persisting delay and resolving delay subgroups in similar proportions to empirical observations. In the model, persisting language delay was caused by limitations in processing capacity, whereas resolving delay was caused by low plasticity. Richness of the language environment did not predict the emergence of persisting delay but did predict the final ability levels of individuals with resolving delay. Conclusion: Mechanistically, it is viable that persisting delay and resolving delay are only quantitatively different. There may be an interaction between environmental factors and outcome groups, with individuals who have resolving delay being influenced more by the richness of the language environment.
Bisazza, A.; Monz, C.
Class-based language modeling (LM) is a long-studied and effective approach to overcoming data sparsity in the context of n-gram model training. In statistical machine translation (SMT), different forms of class-based LMs have been shown to improve baseline translation quality when used in
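The classic factorization behind such models is P(w2 | w1) ≈ P(class(w2) | class(w1)) · P(w2 | class(w2)), which lets the model assign probability to word bigrams it has never seen. A minimal sketch with a hand-assigned word-to-class map (the paper's specific class-LM variants for SMT are not reproduced):

```python
from collections import Counter

def class_bigram_prob(tokens, word2class):
    """Class-based bigram sketch: P(w2 | w1) = P(c2 | c1) * P(w2 | c2)."""
    classes = [word2class[w] for w in tokens]
    c_uni = Counter(classes)               # counts all positions (approximate
    c_bi = Counter(zip(classes, classes[1:]))  # normalizer for the last token)
    w_counts = Counter(tokens)

    def prob(w1, w2):
        c1, c2 = word2class[w1], word2class[w2]
        p_cc = c_bi[(c1, c2)] / c_uni[c1]  # class transition probability
        p_wc = w_counts[w2] / c_uni[c2]    # word emission within its class
        return p_cc * p_wc

    return prob

word2class = {"cat": "N", "dog": "N", "runs": "V", "sleeps": "V"}
tokens = "cat runs dog sleeps cat sleeps".split()
p = class_bigram_prob(tokens, word2class)
unseen = p("dog", "runs")   # the bigram "dog runs" never occurs in the corpus
```

The unseen bigram still gets substantial probability because its noun-to-verb class transition and the word "runs" are both attested, which is exactly how class-based LMs fight sparsity.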
Maria Eugenia Guapacha Chamorro
This paper reports an action-research study on language learning strategies in tertiary education at a Colombian university. The study aimed at improving the English language performance and language learning strategies use of 33 first-year pre-service language teachers by combining elements from two models: the cognitive academic language learning approach and task-based language teaching. Data were gathered through surveys, a focus group, students’ and teachers’ journals, language tests, and documentary analysis. Results evidenced that the students improved in speaking, writing, grammar, vocabulary, and in their language learning strategies repertoire. In conclusion, explicit strategy instruction within the proposed model proved an effective combination for improving learners’ language learning strategies and performance.
Cochella, C; Lauman, J R; Goede, P; Harnsberger, H R; Katzman, G L
It is challenging to remotely share generic medical case information without an agreed upon definition of a medical digital teaching file (DTF). By utilizing an application of the extensible markup language (XML) called web-distributed data exchange (WDDX) along with an agreed upon WDDX structure, it is technically easy to share or syndicate medical case DTFs across computing environments that use different information models and computer languages. Thus, this easily implemented technology offers us an immediately available means to share and increase the value of scientific knowledge.
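A minimal sketch of what a WDDX-serialized DTF record could look like, using Python's standard library. The field names (diagnosis, modality, findings) are illustrative assumptions, not the agreed-upon WDDX structure described in the paper:

```python
import xml.etree.ElementTree as ET

def build_dtf_record(diagnosis, modality, findings):
    """Serialize a digital teaching file (DTF) record as a WDDX-style
    packet: a <wddxPacket> envelope holding a <struct> of named <var>
    elements, each wrapping a typed value."""
    packet = ET.Element("wddxPacket", version="1.0")
    data = ET.SubElement(packet, "data")
    record = ET.SubElement(data, "struct")
    fields = [("diagnosis", diagnosis), ("modality", modality), ("findings", findings)]
    for name, value in fields:
        var = ET.SubElement(record, "var", name=name)
        ET.SubElement(var, "string").text = value
    return ET.tostring(packet, encoding="unicode")

xml_text = build_dtf_record("glioblastoma", "MRI", "ring-enhancing lesion")
```

Because the payload is plain XML, any environment with an XML parser can consume the record regardless of its internal information model, which is the interoperability point the abstract makes.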
This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names, the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal, and an Event Description Game, which concerns the expression of the role of participants in events through an emergent case grammar. For each experiment, details are provided on how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.
Maks, E.; Tiberius, C.; van Veenendaal, R.; Calzolari, N.; Choukri, K.; Maegaard, B.; Mariani, J.; Odjik, J.; Piperidis, S.; Tapias, D.
The Dutch HLT agency for language and speech technology (known as TST-centrale) at the Institute for Dutch Lexicology is responsible for the maintenance, distribution and accessibility of (Dutch) digital language resources. In this paper we present a project which aims to standardise the format of a
Rector, A L; Bechhofer, S; Goble, C A; Horrocks, I; Nowlan, W A; Solomon, W D
The GALEN representation and integration language (GRAIL) has been developed to support effective clinical user interfaces and extensible re-usable models of medical terminology. It has been used successfully to develop the prototype GALEN common reference (CORE) model for medical terminology and for a series of projects in clinical user interfaces within the GALEN and PEN&PAD projects. GRAIL is a description logic or frame language with novel features to support part-whole and other transitive relations and to support the GALEN modelling style aimed at re-use and application independence. GRAIL began as an experimental language. However, it has clarified many requirements for an effective knowledge representation language for clinical concepts. It still has numerous limitations despite its practical successes. The GRAIL experience is expected to form the basis for future languages which meet the same requirements but have greater expressiveness and more soundly based semantics. This paper provides a description and motivation for the GRAIL language and gives examples of the modelling paradigm which it supports.
Kiram, J. J.; Sulaiman, J.; Swanto, S.; Din, W. A.
This study aims to construct a mathematical model of the relationship between a student's language learning strategy usage and English language proficiency. Fifty-six pre-university students of University Malaysia Sabah participated in this study. A self-report questionnaire called the Strategy Inventory for Language Learning was administered to them to measure their language learning strategy preferences before they sat for the Malaysian University English Test (MUET), the results of which were used to measure their English language proficiency. We fitted a multiple linear regression model, with variable selection performed by stepwise regression. We conducted various assessments of the model obtained, including the global F-test, root mean square error and R-squared. The model obtained suggests that not all language learning strategies should be included in the model when attempting to predict language proficiency.
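The model checks named above (global F-test statistic, root mean square error, R-squared) can be sketched for an ordinary least-squares fit as follows. The data are synthetic and the predictor count is an assumption for illustration, not the study's actual variables.

```python
import numpy as np

# Synthetic illustration of the regression diagnostics mentioned above.
rng = np.random.default_rng(0)
n, k = 56, 2                      # 56 students, 2 strategy predictors (invented)
X = rng.normal(size=(n, k))
y = 1.0 + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n)

Xd = np.column_stack([np.ones(n), X])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # OLS coefficient estimates
resid = y - Xd @ beta

sse = float(resid @ resid)                     # sum of squared errors
sst = float(((y - y.mean()) ** 2).sum())       # total sum of squares
r2 = 1 - sse / sst
rmse = (sse / (n - k - 1)) ** 0.5              # residual standard error
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))   # global F-test statistic

print(f"R2={r2:.3f} RMSE={rmse:.3f} F={f_stat:.1f}")
```

A large F relative to the F(k, n-k-1) reference distribution indicates the predictors jointly explain variance beyond chance, which is what the study's global F-test assesses.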
Sanden, Guro Refsum
Purpose: – The purpose of this paper is to analyse the consequences of globalisation in the area of corporate communication, and investigate how language may be managed as a strategic resource. Design/methodology/approach: – A review of previous studies on the effects of globalisation on corporate...... communication and the implications of language management initiatives in international business. Findings: – Efficient language management can turn language into a strategic resource. Language needs analyses, i.e. linguistic auditing/language check-ups, can be used to determine the language situation...
Organisation models are at the core of an enterprise model, since they represent key aspects of a company's action system. Within MEMO, the Organisation Modelling Language (OrgML) supports the construction of organisation models. They can be divided into two main abstractions: a static abstraction, focusing on the structure of an organisation, which reflects the division of labour with respect to static responsibilities, and a dynamic abstraction, focusing on models of business processes. ...
Strüber, Daniel; Plöger, Jennifer; Acretoaie, Vlad
Cloning is a convenient mechanism to enable reuse across and within software artifacts. On the downside, it is also a practice related to significant long-term maintainability impediments, thus generating a need to identify clones in affected artifacts. A large variety of clone detection techniques...... has been proposed for programming and modeling languages; yet no specific ones have emerged for model transformation languages. In this paper, we explore clone detection for graph-based model transformation languages. We introduce potential use cases for such techniques in the context of constructive...... and analytical quality assurance. From these use cases, we derive a set of key requirements. We describe our customization of existing model clone detection techniques allowing us to address these requirements. Finally, we provide an experimental evaluation, indicating that our customization of ConQAT, one...
Rivers, Damian J.
Adopting mixed methods of data collection and analysis, the current study models the "perceived value of compulsory English language education" in a sample of 138 undergraduate non-language majors of Japanese nationality at a national university in Japan. During the orientation period of a compulsory 15-week English language programme,…
This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...
Marti, U-V.; Bunke, H.
In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different
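Perplexity, used above to compare the prediction power of the language models, is the exponentiated average negative log-probability the model assigns to the test tokens. A minimal sketch with invented probabilities:

```python
import math

# Perplexity of a language model over a test sequence:
# PP = exp(-(1/N) * sum(log p(w_i | context))).
# The per-token probabilities below are invented for illustration.
def perplexity(token_probs):
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

uniform = [0.25] * 4          # a model assigning 1/4 to each of 4 tokens
print(round(perplexity(uniform), 6))  # → 4.0
```

A uniform model over 4 outcomes has perplexity 4, i.e. it is as uncertain as a fair 4-way choice at every step; lower perplexity means stronger prediction, which is how the paper relates the models to recognition performance.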
Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D
Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.
Bondareva, Evgeniya; Chistyakova, Galina; Kleshevskyi, Yury; Sergeev, Sergey; Stepanov, Aleksey
Nowadays foreign language competence is one of the main professional skills of mining engineers. Modern competitive conditions require that specialists and managers of mining enterprises be able to meet production challenges in a foreign language. This explains the high demand for foreign language training and retraining courses. Language training of adult learners fundamentally differs from the education of children and adolescents. The article describes the features of the andragogical learning model. The authors conclude that distance learning is the most productive form of education, having a number of obvious advantages over the traditional (in-class) one. Interactive learning methods that involve the active engagement of adult trainees appear to be of the greatest interest due to the introduction of modern information and communication technologies for distance learning.
Tang, Wenxi; Xie, Jing; Lu, Yijuan; Liu, Qizhi; Malone, Daniel; Ma, Aixia
The State Council of China requires that all urban public hospitals must eliminate drug markups by September 2017, and that hospital drugs must be sold at the purchase price. Nanjing, one of the first provincial capital cities to implement the reform, is studied to evaluate the effects of the comprehensive reform on drug prices in public hospitals and to explore differential compensation plans. Sixteen hospitals were selected, and financial data were collected over the 48-month period before the reform and for 12 months after the reform. An analysis was carried out using a simple linear interrupted time series model. The average difference ratio of drug surplus fell 13.39% after the reform, and the drug markups were basically eliminated. Revenue from medical services showed a net growth of 28.25%. The overall compensation received from the government financial budget and medical service revenue growth was 103.69% for the loss from policy-permitted 15% markup sales, and 116.48% for the net loss. However, there were large differences in compensation levels at different hospitals, ranging from -21.92% to 413.74% by medical services revenue growth, causing the combined rate of both financial and service compensation to vary from 28.87% to 413.74%. There was a significant positive correlation between the services compensation rate and the proportion of medical service revenue. Nanjing's pricing and compensation reform has basically achieved the policy targets of eliminating the drug markups, promoting the growth of medical services revenue, and adjusting the structure of medical revenue. However, the growth rate of service revenue varied significantly from hospital to hospital. Nanjing's reform represents a successful pricing and compensation reform in Chinese urban public hospitals. It is recommended that a differentiated and dynamic compensation plan be established in accordance with the revenue structure of different hospitals.
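A simple linear interrupted time series of the kind the study describes can be sketched with two extra regressors: a post-reform level shift and a post-reform slope change. All numbers below are synthetic, not the study's data.

```python
import numpy as np

# Synthetic interrupted time series: 48 pre-reform months, 12 post-reform
# months, with an invented level drop (-15) and slope change (-0.3) at month 48.
t = np.arange(60)
post = (t >= 48).astype(float)     # indicator for the post-reform period
y = 100 + 0.5 * t - 15 * post + np.where(post > 0, -0.3 * (t - 48), 0)

# Design matrix: intercept, pre-reform trend, level change, slope change.
X = np.column_stack([np.ones_like(t), t, post, post * (t - 48)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
print(round(level_change, 2), round(slope_change, 2))  # → -15.0 -0.3
```

The level-change coefficient captures the immediate effect of the reform and the slope-change coefficient the change in trend, which is how such designs separate a one-off drop from a sustained shift.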
Gilberto Tadeu Lima
We develop a post-Keynesian macrodynamic model of capacity utilization, distribution and conflict inflation, in which the supply of credit money is endogenous. The nominal interest rate is determined by banks as a markup over the base rate, which is set by the monetary authority. Over time, the banking markup varies with firms' profit rate on physical capital, while the base rate rises with any excess demand that cannot be accommodated through capacity utilization. The behavior of the economy is analyzed for the cases in which demand is or is not sufficient to ensure full capacity utilization.
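One common reading of the banking pricing rule described above, a proportional markup over the monetary authority's base rate, can be sketched as follows; the rates are purely illustrative and the functional form is an assumption, not taken from the paper.

```python
# Banking-sector pricing rule: the nominal loan rate is the central bank's
# base rate scaled up by a proportional markup. Numbers are illustrative only.
def loan_rate(base_rate, markup):
    """Nominal interest rate as a proportional markup over the base rate."""
    return (1 + markup) * base_rate

print(round(loan_rate(0.05, 0.40), 4))  # → 0.07 (5% base rate, 40% markup)
```

In the model the markup itself is not constant: it moves with firms' profit rate, so the effective cost of credit co-evolves with the real side of the economy.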
Bernardo, Allan B I
The study was conducted to determine whether the language of math word problems would affect how Filipino-English bilingual problem solvers would model the structure of these word problems. Modeling the problem structure was studied using the problem-completion paradigm, which involves presenting problems without the question. The paradigm assumes that problem solvers can infer the appropriate question of a word problem if they correctly grasp its problem structure. Arithmetic word problems in Filipino and English were given to bilingual students, some of whom had Filipino as a first language and others who had English as a first language. The problem-completion data and solution data showed similar results. The language of the problem had no effect on problem-structure modeling. The results were discussed in relation to a more circumscribed view about the role of language in word problem solving among bilinguals. In particular, the results of the present study showed that linguistic factors do not affect the more mathematically abstract components of word problem solving, although they may affect the other components such as those related to reading comprehension and understanding.
Naomi Kenney Caselli
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
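A minimal sketch in the spirit of the spreading-activation account described above: activation flows from perceived sub-lexical units (handshapes, locations) to the lexical items containing them. The tiny lexicon and unit names are invented, and the update rule is a simplification, not the authors' model.

```python
# Toy spreading-activation lexicon: each sign is a set of sub-lexical units.
# Signs, units, and the decay parameter are invented for illustration.
lexicon = {
    "SIGN_A": {"handshape:flat", "location:chin"},
    "SIGN_B": {"handshape:flat", "location:temple"},
    "SIGN_C": {"handshape:fist", "location:chin"},
}

def activate(perceived_units, steps=1, decay=0.5):
    """Return lexical activations after spreading from perceived units."""
    act = {w: 0.0 for w in lexicon}
    for _ in range(steps):
        for word, units in lexicon.items():
            # Each word accumulates activation from the units it shares
            # with the percept, on top of its decayed previous activation.
            act[word] = decay * act[word] + len(units & perceived_units)
    return act

act = activate({"handshape:flat", "location:chin"})
best = max(act, key=act.get)
print(best, act[best])  # → SIGN_A 2.0
```

The sign sharing both perceived units wins, while "neighbors" sharing only a handshape or only a location receive partial activation, which is the kind of graded neighborhood effect the abstract discusses.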
Cibrario Bertolotti, Ivan; Hu, Tingting; Navet, Nicolas
Fast-paced innovation in the embedded systems domain puts an ever increasing pressure on effective software development methods, leading to the growing popularity of Model-Based Design (MBD). In this context, a proper choice of modeling languages and related tools - depending on design goals and problem qualities - is crucial to make the most of MBD benefits. In this paper, a comparison between two dissimilar approaches to modeling is carried out, with the goal of highlighting their relative ...
Gamatié, Abdoulaye; Gautier, Thierry
This document presents a study on the modeling of architecture components for avionics applications. We consider the avionics standard ARINC 653 specifications as a basis, as well as the synchronous language SIGNAL to describe the modeling. A library of APEX object models (partition, process, communication and synchronization services, etc.) has been implemented. This should allow distributed real-time applications to be described using POLYCHRONY, so as to access formal tools and techniques for ar...
Feenstra, Remco; Wieringa, Roelf J.
The syntax of the conceptual model specification language LCM is defined. LCM uses equational logic to specify data types and order-sorted dynamic logic to specify objects with identity and mutable state. LCM specifies database transactions as finite sets of atomic object transitions.
Carter, S.; Monz, C.
This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical
Agents affect events or are in turn affected by them; events are contextualised in space and time; the affective impact is reflected by evaluative language; and cause and effect give the narrative momentum. The aim was to illustrate how the narrativity model could be used to identify and map linguistic features associated with ...
Kaptein, Rianne; Hiemstra, Djoerd; Kamps, Jaap
Word clouds are a summarised representation of a document’s text, similar to tag clouds which summarise the tags assigned to documents. Word clouds are similar to language models in the sense that they represent a document by its word distribution. In this paper we investigate the differences
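The view of a word cloud as a unigram language model can be sketched directly: represent the document by its normalized word distribution and take the top-weighted terms as the cloud. The toy document below is invented.

```python
from collections import Counter

# A word cloud as a unigram language model: the document is represented by
# its normalized word distribution; the cloud shows the top-weighted terms.
doc = "language models represent a document by its word distribution language models"
counts = Counter(doc.split())
total = sum(counts.values())
distribution = {w: c / total for w, c in counts.items()}

top = sorted(distribution, key=distribution.get, reverse=True)[:2]
print(top)  # → ['language', 'models']
```

In a real system one would weight terms by more than raw frequency (e.g. discounting stopwords), which is one of the differences between naive word clouds and proper document language models that the paper investigates.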
Niemi, Timo; Hirvonen, Lasse; Jarvelin, Kalervo
Discusses multidimensional data analysis, or online analytical processing (OLAP), which offers a single subject-oriented source for analyzing summary data based on various dimensions. Develops a conceptual/logical multidimensional model for supporting the needs of informetrics, including a multidimensional query language whose basic idea is to…
Hassan, H.; Sima'an, K.; Way, A.
Syntactically-enriched language models (parsers) constitute a promising component in applications such as machine translation and speech-recognition. To maintain a useful level of accuracy, existing parsers are non-incremental and must span a combinatorially growing space of possible structures as
Van Heerden, C
This paper describes the language modelling (LM) architectures and recognition experiments that enabled support of 'what-with-where' queries on GOOG-411. First the paper compares accuracy trade-offs between a single national business LM for business...
Computer-aided transformation of PDE models: languages, representations, and a calculus of operations. A domain-specific embedded language called ibvp was developed to model initial boundary value problems. Vision and background: physical and engineered systems ...
The understanding of language competition helps us to predict the extinction and survival of languages spoken by minorities. A simple agent-based model of a sexual population, based on the Penna model, is built in order to find out under which circumstances one language dominates the others. This model considers that only young people learn foreign languages. The simulations show a first-order phase transition of the ratio between the numbers of speakers of different languages, with the mutation rate as the control parameter.
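A heavily simplified sketch inspired by the description above (not the Penna-based model itself): agents adopt the majority language each generation except for occasional random switches governed by a mutation rate, so at low mutation a small initial advantage lets one language dominate.

```python
import random

# Toy two-language competition: each generation, (young) agents adopt the
# current majority language, except with probability `mutation` they switch.
# All parameters are invented; this is only a caricature of the Penna-based model.
def simulate(n=1000, generations=50, mutation=0.01, seed=1):
    random.seed(seed)
    pop = [random.random() < 0.6 for _ in range(n)]   # 60% start with language 1
    for _ in range(generations):
        majority = sum(pop) * 2 > n
        pop = [not majority if random.random() < mutation else majority
               for _ in range(n)]
    return sum(pop) / n                                # final share of language 1

share = simulate()
print(round(share, 2))
```

With a low mutation rate the initially larger language takes over almost completely; raising the mutation rate toward 0.5 destroys dominance, which is the qualitative content of the phase transition the abstract reports.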
The article analyses the theories on the child's language acquisition and development process (psychological nativism, cognitivism, interactionism, behaviorism), and it is concluded that the various models of language acquisition raised in these theories depend on the language development stage and its representative factors - the dominant neural processes, language acquisition strategies and the results in a context of language development. Speech and language development and their interconnecti...
Van Horne, Sam; Russell, Jae-eun; Schuh, Kathy L.
Researchers have more often examined whether students prefer using an e-textbook over a paper textbook or whether e-textbooks provide a better resource for learning than paper textbooks, but students' adoption of mark-up tools has remained relatively unexamined. Drawing on the concept of Innovation Diffusion Theory, we used educational data mining…
There is a predominant proposition in trade theory that firms operating in an imperfect market with trade barriers often set prices with a positive mark-up. Workers using insider information tend to bargain and share the rent from firms' market power, which tends to decline with trade reforms. Empirical ...
Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick
In this paper, we use detailed information about firms' product portfolio to study how trade liberalization affects prices, markups and productivity. We document these effects using firm product level data in Chilean manufacturing following two major trade agreements with the EU and the US...
In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation...... and programming notation in relation to each other. As the main contribution of the paper we describe a number of general issues to consider when subsuming XML in a given programming language....
In order to develop sustainable waste management systems considering a life cycle perspective, scientists and domain experts in environmental science require readily applicable tools for modeling and evaluating the life cycle impacts of waste management systems. Practice has proved that modeling these systems with general-purpose tools is a cumbersome task. On one hand, the scientists have to spend a considerable amount of time to understand these tools in order to develop their models. On the other hand, integrated assessments are becoming gradually common in environmental management ... Flow-based programming is used to support concurrent execution of the processes, and provides a model-integration language for composing processes from homogeneous or heterogeneous domains. And a domain-specific language is used to define atomic ... environmental technologies, i.e. solid waste management systems.
... activation. Once phosphorylated, S6K1 then phosphorylates multiple downstream proteins, including 4E-BP and the S6 ribosomal subunit. In vitro ... Jain and Bhalla (2009), which describes the pathway in a modular fashion. Our work converted the existing model from Systems Biology Markup Language ... This study examined the in vivo kinetics of ...
The Cadastral Data Model has been developed as a part of a larger programme to improve products and the production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to relevant standards and specifications in the field of geoinformation (GI) adopted by the international organisations for standardisation with competence over GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project have been object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at the class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of UML models, and then the XML schema was transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.
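As a rough sketch of wrapping GML geometry inside an application-schema feature of the kind described above, the snippet builds a hypothetical cadastral parcel. The Parcel and parcelNumber names are invented; only the gml:Point/gml:pos nesting follows the GML convention.

```python
import xml.etree.ElementTree as ET

# Hypothetical application-schema feature wrapping a GML point geometry.
# "Parcel", "parcelNumber" and "geometry" are invented illustration names.
GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

parcel = ET.Element("Parcel")
ET.SubElement(parcel, "parcelNumber").text = "123/4"
geom = ET.SubElement(parcel, "geometry")
point = ET.SubElement(geom, f"{{{GML}}}Point")
ET.SubElement(point, f"{{{GML}}}pos").text = "45.81 15.98"

text = ET.tostring(parcel, encoding="unicode")
print(text)
```

In a real GML application schema the feature types and their properties would be declared in an XSD derived from the UML model, so that instance documents like this one can be validated against the agreed structure.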
The article reveals a scientifically substantiated process of teaching the reading of English-language periodicals in all its components, which are consistently developed and form the interconnection of the structural elements in the process of teaching reading. This process is presented as a few interconnected and interdetermined models: (1) models of the process of acquiring standard and expressive lexical knowledge; (2) models of the process of formation of skills to use such vocabulary; (3) models of the development of skills to read texts of different linguistic levels.
Definition of programming languages consists of the formal definition of syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after the execution of elementary steps of a program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states, an abstraction of computer memory, and morphisms model state changes, i.e. the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure with many useful proved properties and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. Our model is not only a new, easily visualized model of the structural operational semantics of imperative programming languages; it can also serve educational purposes.
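The categorical picture described above can be sketched concretely: states are variable environments (objects), each elementary execution step is a function on states (a morphism), and running a program is composition of morphisms. The mini-language below is invented for illustration.

```python
# States as dicts (objects of the category); elementary steps as functions
# on states (morphisms); program execution as composition of morphisms.
# The assignment-only mini-language is invented for illustration.
def assign(var, value):
    """Morphism for 'var := value': maps a state to the updated state."""
    def step(state):
        new = dict(state)   # states are immutable; a step yields a new state
        new[var] = value
        return new
    return step

def compose(*steps):
    """Composition of morphisms = sequential execution of the program."""
    def run(state):
        for s in steps:
            state = s(state)
        return state
    return run

program = compose(assign("x", 1), assign("y", 2), assign("x", 3))
print(program({}))  # → {'x': 3, 'y': 2}
```

The path through intermediate states ({} → {x:1} → {x:1, y:2} → {x:3, y:2}) is exactly the graphical composition-of-morphisms picture the abstract advertises.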
Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe
Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim ... to abstract from concrete transformation strategies by distinguishing two major paradigms for process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa. We also present two case studies of applying our strategies.
emphasis on the challenges that faced compilers, and possible solutions. It is assumed that some of ... informants. The researchers and compilers of the Ndebele language corpus had foreseen some of these ... ence of particular software systems," says Kennedy (1998: 82), "the Standard Generalised Mark-up Language ...
The semantic models of sentences with verbs of motion in the German standard language and in the scientific language used in biology are analyzed in the article. In its theoretical part it is affirmed that the article is based on the semantic theory of the sentence. This theory, in turn, is grounded in the correlation of semantic predicative classes and semantic roles. The combination of semantic predicative classes and semantic roles is expressed by the main semantic formula, the proposition. In its practical part the differences between the semantic models of the standard language and the scientific language used in biology are explained. In modelling sentences with verbs of motion, two groups of semantic models of sentences are singled out: those of action (Handlung) and process (Vorgang). The analysis shows that the semantic models of sentences with semantic action predicatives dominate in texts of the standard language, while the semantic models of sentences with semantic process predicatives dominate in texts of the scientific language used in biology. The differences in how the doer and direction are expressed in the standard and in the scientific language are clearly seen, and the semantic cases (Agens, Patiens, Direktiv1) help to determine that. It is observed that in scientific texts of a high level of specialization (biology science), in contrast to popular scientific literature, models of sentences with verbs of motion are seldom found. They are substituted by denominative constructions. In the conclusions it is shown that this analysis can be important in methodology, especially in planning material for teaching professional-scientific language.
Christensen, P R
Kenya's Radio Language Arts Project, directed by the Academy for Educational Development in cooperation with the Kenya Institute of Education in 1980-85, sought to teach English to rural school children in grades 1-3 through an intensive, radio-based instructional system. Daily half-hour lessons were broadcast throughout the school year and supported by teachers and print materials. The project also aimed to test the feasibility of adapting the successful Nicaraguan Radio Math Project to a new subject area. Difficulties were encountered in articulating a language curriculum with the precision required for a media-based instructional system. Defining an acceptable regional standard for pronunciation and grammar was also a challenge; British English was finally selected. An important modification of the Radio Math model concerned the role of the teacher. While Radio Math sought to reduce the teacher's responsibilities during the broadcast, Radio Language Arts teachers played an important instructional role during the English lesson broadcasts by providing translation and checks on work. Evaluations of the Radio Language Arts Project suggest significant gains in speaking, listening, and reading skills as well as high levels of satisfaction on the part of parents and teachers.
Clark, Tony; Van Den Brand, Mark; Combemale, Benoit; Rumpe, Bernhard
International audience; Domain Specific Languages (DSLs) have received some prominence recently. Designing a DSL and all its tools is still cumbersome and a great deal of work. The engineering of DSLs is still in its infancy; not even the terms have been coined and agreed on. In particular, globalization and all its consequences need to be precisely defined and discussed. This chapter provides a definition of the relevant terms and relates them so that a conceptual model emerges. The authors think that th...
Pérez Sancho, Carlos; Rizo Valero, David; Iñesta Quereda, José Manuel
Music genre meta-data is of paramount importance for the organisation of music repositories. People use genre in a natural way when entering a music store or looking into music collections. Automatic genre classification has become a popular topic in music information retrieval research, both with digital audio and symbolic data. This work focuses on the symbolic approach, bringing to music cognition some technologies, like stochastic language models, already successfully applied to text ...
Chanier, Thierry; Pengelly, Michael
The results of past studies in Error Analysis in applied linguistics and the experiences of developers of intelligent tutoring systems in learner modelling have influenced our definition of a new structure, called an "applicable rule", that can be used to help diagnose and to represent a learner's performance in second language learning systems. Based on this structure a prototype interface has been designed to acquire the knowledge that it must contain. The results of experiments with this i...
Rossi, Eleonora; Diaz, Michele T
Healthy non-pathological aging is characterized by cognitive and neural decline, and although language is one of the more stable areas of cognition, older adults often show deficits in language production, with word-finding failures, increased slips of the tongue, and increased pauses in speech. Overall, research on language comprehension in healthy older adults shows that it is better preserved than language production. Bilingualism has been shown to confer a great deal of neuroplasticity across the life span, including a number of cognitive benefits, especially in executive functions such as cognitive control. Many models have been proposed to explain bilingual language processing. However, the question remains open of how such models might be modulated by age-related changes in language. Here, we discuss how current models of language processing in non-pathological aging and models of bilingual language processing can be integrated to provide new research directions.
Zhang, Ling; Williams, Richard A; Gatherer, Derek
Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly non-computable on a Turing machine. If (M,R) is truly non-computable, there are serious implications for the modelling of large biological networks in computer software. A body of work has now accumulated addressing Rosen's claim concerning (M,R) by attempting to instantiate it in various software systems. However, a conclusive refutation has remained elusive, principally since none of the attempts to date have unambiguously avoided the critique that they have altered the properties of (M,R) in the coding process, producing merely approximate simulations of (M,R) rather than true computational models. In this paper, we use the Unified Modelling Language (UML), a diagrammatic notation standard, to express (M,R) as a system of objects having attributes, functions and relations. We believe that this instantiates (M,R) in such a way that none of the original properties of the system are corrupted in the process. Crucially, we demonstrate that (M,R) as classically represented in the relational biology literature is implicitly a UML communication diagram. Furthermore, since UML is formally compatible with object-oriented computing languages, instantiation of (M,R) in UML strongly implies its computability in object-oriented coding languages. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
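The paper's claim that a UML instantiation of (M,R) implies computability in object-oriented languages can be illustrated with a minimal Python sketch (not taken from the paper; the concrete functions are placeholder assumptions) in which each of Rosen's maps (metabolism f, repair Φ, replication β) is a first-class object produced by another map in the system:

```python
# Hypothetical sketch of Rosen's (M,R) maps as first-class Python objects.
# The names follow the relational-biology reading in the abstract; the
# concrete arithmetic is illustrative only, not biological.

def metabolism(a):
    # f : A -> B  (metabolism turns substrate a into product b)
    return a * 2

def repair(b):
    # Phi : B -> H(A, B)  (repair rebuilds the metabolism map from a product)
    return metabolism

def replication(f):
    # beta : H(A, B) -> H(B, H(A, B))  (replication rebuilds the repair map)
    return repair

# Organisational closure: every map in the system is itself the output of
# another map in the system, with no external "machine" required.
b = metabolism(3)                  # apply f
f_again = repair(b)                # Phi(b) recovers f
phi_again = replication(f_again)   # beta(f) recovers Phi
assert f_again is metabolism and phi_again is repair
```

The point of the sketch is only that the closed loop f → Φ → f poses no obstacle to an object-oriented language, since functions are ordinary objects that other functions can return.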
Morse, Anthony F; Cangelosi, Angelo
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.
Solomon, Steven; van Lent, Michael; Core, Mark; Carpenter, Paul; Rosenberg, Milton
.... The Culturally-Affected Behavior project seeks to define a language for encoding ethnographic data in order to capture cultural knowledge and use that knowledge to affect human behavior models...
Wijayanto, Bangun; Taryana, Acep
The Unified Modeling Language (UML) is a language that has become the industry standard for visualizing, designing and documenting software systems. Using UML, models can be built for all types of software applications, whatever language they are written in. SMS (Short Message Service) is a good choice for solving the geographic problems of spreading information to the alumni of Unsoed. The aim of this research is to compile UML (Unified Modeling Language) notation in the development of an SMS Serv...
Finin, Timothy; Mayfield, James; Grosof, Benjamin
...: exploring and evaluating how semantic web technology can be integrated into and used by agent based systems, developing techniques for building information retrieval systems using semantic web data...
Although going into the field with pupils remains the best way to understand the geologic structure of a deposit, it is very difficult to bring them to a mining extraction site, and it is impossible to explore whole regions in search of these resources. For those reasons KML (with the Google Earth interface) is a very complete tool for teaching geosciences. Simple and intuitive, its handling is quickly mastered by pupils, and it also allows teachers to validate skills for IT certificates. It allows the use of KML files stemming from online banks, from personal productions of the teacher, or from pupils' works. These tools offer a global approach in 3D as well as geolocation-based access to any type of geological data. The resource on which I built this KML, methane hydrate, is taught in the curriculum of all three years of French high school. This non-conventional hydrocarbon molecule sits on the vague border between mineral and organic matter (as phosphate deposits do). For over ten years it has been the subject of a race to exploit gas hydrate fields in order to try to meet world demand. Methane hydrate fields are very useful and interesting for studying the three major themes of geological resources: exploration, exploitation, and risks, especially for environments and populations. The KML that I propose allows pupils to put themselves in the shoes of a geologist in search of deposits, or of the technician who is going to extract the resource. It also allows them to evaluate the risks connected to the effect of tectonic activity or climatic change on the natural or catastrophic release of methane, and its role in the increase of the greenhouse effect. This KML, together with many pedagogic activities, is directly downloadable for teachers at http://eduterre.ens-lyon.fr/eduterre-usages/actualites/methane/.
Finin, Timothy; Mayfield, James; Grosof, Benjamin
The UMBC lead DAML project began with the goal to design and prototype critical software components enabling developers to create intelligent software agents capable of understanding and processing...
...) technology and the challenges the federal government faces in implementing it. XML is a flexible, nonproprietary set of standards designed to facilitate the exchange of information among disparate computer systems, using the Internet's protocols...
Hillah, L. M.; Kindler, Ekkart; Kordon, F.
ISO/IEC 15909 is an International Standard concerned with high-level Petri nets. Part 1 defines the concepts, the mathematics, and the graphical notation, along with some variants of high-level nets. Part 2 of ISO/IEC 15909, which is currently under the last ballot to become an International Standard, ... it is used in a setting restricted to high-level nets and a simple version of Petri nets called Place/Transition-Systems. Future parts of ISO/IEC 15909 will use the generality of PNML and also standardise some of its other concepts. For example, it is planned that Part 3 will define a module concept known from modular PNML and will make the concept for defining new Petri net types explicit. In this paper, we discuss PNML, its relation to ISO/IEC 15909, and the main ideas for the future extensions of PNML and its standardisation in Part 3 of ISO/IEC 15909.
S. Pemberton (Steven) et al
This specification defines XHTML 1.0, a reformulation of HTML 4 as an XML 1.0 application, and three DTDs corresponding to the ones defined by HTML 4. The semantics of the elements and their attributes are defined in the W3C Recommendation for HTML 4. These semantics provide the ...
Evensen, Kenneth D.; Weiss, Kathryn Anne
A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.
Mengi V. Ondar
Full Text Available Contemporary information technologies and mathematical modelling have made creating corpora of natural languages significantly easier. A corpus is an information and reference system based on a collection of digitally processed texts. A corpus includes various written and oral texts in the given language, a set of dictionaries, and markup: information on the properties of the text. It is the presence of the markup which distinguishes a corpus from an electronic library. At the moment, national corpora are being set up for many languages of the Russian Federation, including those of the Turkic peoples. Faculty members, postgraduate and undergraduate students at Tuvan State University and Siberian Federal University are working on the National Corpus of Tuvan Language. This article describes the structure of a dictionary entry in the National Corpus of Tuvan Language. The corpus database comprises the following tables: MAIN, the headword table; RUS, ENG and GER, translations of the headword into three languages; and MORPHOLOGY, the table containing morphological data on the headword. The database is built in Microsoft Office Access. Working with the corpus dictionary includes the following functions: adding, editing and removing an entry, entry search (with transcription), and setting and visualizing the morphological features of a headword. The project allows us to view the corpus dictionary as a multi-structure entity with a complex hierarchical structure and a dictionary entry as its key component. The corpus dictionary we developed can be used for studying Tuvan pronunciation, orthography and word analysis, as well as for searching for words and collocations in the texts included in the corpus.
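For illustration, the table layout named in the abstract could be sketched as follows. This is a hypothetical SQLite reconstruction, not the project's actual Access schema; only the table names (MAIN, RUS, ENG, GER, MORPHOLOGY) come from the abstract, and all column names are assumptions.

```python
import sqlite3

# Illustrative reconstruction of the corpus-dictionary tables named in the
# abstract; column names are assumptions, not the project's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MAIN (id INTEGER PRIMARY KEY, headword TEXT, transcription TEXT);
CREATE TABLE RUS  (main_id INTEGER REFERENCES MAIN(id), translation TEXT);
CREATE TABLE ENG  (main_id INTEGER REFERENCES MAIN(id), translation TEXT);
CREATE TABLE GER  (main_id INTEGER REFERENCES MAIN(id), translation TEXT);
CREATE TABLE MORPHOLOGY (main_id INTEGER REFERENCES MAIN(id),
                         feature TEXT, value TEXT);
""")

# Two of the functions the abstract lists: adding an entry and entry search.
conn.execute("INSERT INTO MAIN VALUES (1, 'суг', 'sug')")   # Tuvan 'water'
conn.execute("INSERT INTO ENG VALUES (1, 'water')")
row = conn.execute(
    "SELECT m.headword, e.translation "
    "FROM MAIN m JOIN ENG e ON e.main_id = m.id"
).fetchone()
print(row)  # ('суг', 'water')
```

Splitting translations into per-language tables, as the abstract describes, keeps the headword table language-neutral and lets further languages be added without altering MAIN.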
The aim of this exploratory study was to examine the role of the "First Language First" model for preschool bilingual education in the development of vocabulary depth. The languages studied were Russian (L1) and Hebrew (L2) among bilingual children aged 4-5 years in Israel. According to this model, the children's first language of…
Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.
We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be
Aycock, Dawn M; Sims, Traci T; Florman, Terri; Casseus, Karis T; Gordon, Paula M; Spratling, Regena G
Some words and phrases used by health care providers may be perceived as insensitive by patients, which could negatively affect patient outcomes and satisfaction. However, a distinct concept that can be used to describe and synthesize these words and phrases does not exist. The purpose of this article is to propose the concept of language sensitivity, defined as the use of respectful, supportive, and caring words with consideration for a patient's situation and diagnosis. Examples of how language sensitivity may be lacking in nurse-patient interactions are described, and solutions are provided using the RESPECT (Rapport, Environment/Equipment, Safety, Privacy, Encouragement, Caring/Compassion, and Tact) model. RESPECT can be used as a framework to inform and remind nurses about the importance of sensitivity when communicating with patients. Various approaches can be used by nurse educators to promote language sensitivity in health care. Case studies and a lesson plan are included. J Contin Educ Nurs. 2017;48(11):517-524. Copyright 2017, SLACK Incorporated.
et al., 1998, p. 306; cf. Bundsgaard, 2005, p. 315ff.). At least five challenges can be identified (Barron et al., 1998; Bundsgaard, 2009, 2010; Gregersen & Mikkelsen, 2007; Krajcik et al., 1998): organizing collaboration, structuring workflows, integrating academic content, sharing products ... of such a platform. The Collaborative Learning Modeling Language (ColeML) makes it possible to articulate complex designs for learning visually and to activate these design models as interactive learning materials. ColeML is based on research in workflow and business process modeling. The traditional approach in this area, represented by, for example, the Workflow Management Coalition (Hollingsworth, 1995) and the very widespread standard Business Process Modeling and Notation (BPMN), has been criticized on the basis of research in knowledge work processes. Inspiration for ColeML is found in this research area ...
The EEC Esprit Basic Research Action No 3011, Models, Languages and Logics for Concurrent Distributed Systems (CEDISYS), held its second workshop at Aarhus University in May 1991, following the successful workshop in San Miniato in 1990. The Aarhus workshop was centered around CEDISYS research activities and the selected themes of Applications and Automated Tools in the area of Distributed Systems. The 24 participants were CEDISYS partners and invited guests with expertise on the selected themes. This booklet contains the program of the workshop and short abstracts for the talks presented ...
Tran, Nam; Michel, George; Krauthammer, Michael; Shiffman, Richard N
The Guideline Elements Model (GEM) uses XML to represent the heterogeneous knowledge contained in clinical practice guidelines. GEM has important applications in computer aided guideline authoring and clinical decision support systems. However, its XML representation format could limit its potential impact, as semantic web ontology languages, such as OWL, are becoming major knowledge representation frameworks in medical informatics. In this work, we present a faithful translation of GEM from XML into OWL. This translation is intended to keep the knowledge model of GEM intact, as this knowledge model has been carefully designed and has become a recognized standard. An OWL representation would make GEM more applicable in medical informatics systems that rely on semantic web. This work will also be the initial step in making GEM a guideline recommendation ontology.
Full Text Available Interoperability is the faculty of making information systems work together. In this paper we distinguish a number of different forms that interoperability can take and show how they are realised on a variety of physiological and health care use cases. The last fifteen years have seen the rise of very cheap digital storage both on and off site. With the advent of the 'Internet of Things', people's expectations are for greater interconnectivity and seamless interoperability. The potential impact these technologies have on healthcare is dramatic: from improved diagnoses through immediate access to a patient's electronic health record, to 'in silico' modeling of organs and early-stage drug trials, to predictive medicine based on top-down modeling of disease progression and treatment. We begin by looking at the underlying technology, classify the various kinds of interoperability that exist in the field, and discuss how they are realised. We conclude with a discussion of future possibilities that big data and further standardization will enable.
Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe
Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for representing control flow in process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies ...
Ramus, Franck; Marshall, Chloe R.; Rosen, Stuart
An on-going debate surrounds the relationship between specific language impairment and developmental dyslexia, in particular with respect to their phonological abilities. Are these distinct disorders? To what extent do they overlap? Which cognitive and linguistic profiles correspond to specific language impairment, dyslexia and comorbid cases? At least three different models have been proposed: the severity model, the additional deficit model and the component model. We address this issue by comparing children with specific language impairment only, those with dyslexia-only, those with specific language impairment and dyslexia and those with no impairment, using a broad test battery of language skills. We find that specific language impairment and dyslexia do not always co-occur, and that some children with specific language impairment do not have a phonological deficit. Using factor analysis, we find that language abilities across the four groups of children have at least three independent sources of variance: one for non-phonological language skills and two for distinct sets of phonological abilities (which we term phonological skills versus phonological representations). Furthermore, children with specific language impairment and dyslexia show partly distinct profiles of phonological deficit along these two dimensions. We conclude that a multiple-component model of language abilities best explains the relationship between specific language impairment and dyslexia and the different profiles of impairment that are observed. PMID:23413264
Phillips, Lawrence; Pearl, Lisa
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
Al-Sayed, Rania Kamal Muhammad; Abdel-Haq, Eman Muhammad; El-Deeb, Mervat Abou-Bakr; Ali, Mahsoub Abdel-Sadeq
The present study aimed at developing the English language planning strategies of second-year distinguished governmental language preparatory school pupils using a WebQuest model. Fifty participants from the second year at Hassan Abu-Bakr Distinguished Governmental Language School at Al-Qanater Al-Khairia (Qalubia Governorate) were randomly assigned…
Koper, Rob; Manderveld, Jocelyn
Koper, E, J, R., & Manderveld, J. M. (2004). Educational modelling language: modelling reusable, interoperable, rich and personalised units of learning. British Journal of Educational Technology, 35 (5), 537-552.
Please refer to the printed version of the article. Rob Koper and
Jensen, Kristian; Cardoso, Joao; Sonnenschein, Nikolaus
Optlang is a Python package implementing a modeling language for solving mathematical optimization problems, i.e., maximizing or minimizing an objective function over a set of variables subject to a number of constraints. It provides a common native Python interface to a series of optimization tools, so different solver backends can be used and changed in a transparent way. Optlang’s object-oriented API takes advantage of the symbolic math library SymPy (Team 2016) to allow objective functions and constraints to be easily formulated algebraically from symbolic expressions of variables. Optlang targets scientists who can thus focus on formulating optimization problems based on mathematical equations derived from domain knowledge. Solver interfaces can be added by subclassing the four main classes of the optlang API (Variable, Constraint, Objective, and Model) and implementing ...
We show how events can be modeled in terms of UML. We view events as change agents that have consequences and as information objects that represent information. We show how to create object-oriented structures that represent events in terms of attributes, associations, operations, state charts, a...
Filippi, R.; Karaminis, T.; Thomas, M.
One key issue in bilingualism is how bilinguals control production, particularly to produce words in the less dominant language. Language switching is one method to investigate control processes. Language switching has been much studied in comprehension, e.g., in lexical decision task, but less so in production. Here we first present a study of language switching in Italian–English adult bilinguals in a naming task for visually presented words. We demonstrate an asymmetric pattern of time cos...
Shearer, Samuel R.
Approved for public release; distribution is unlimited Loss of foreign language proficiency is a major concern for the Department of Defense (DoD). Despite significant expenditures to develop and sustain foreign language skills in the armed forces, the DoD has not been able to create a sufficient pool of qualified linguists. Many theories and hypotheses about the learning of foreign languages are not based on cognitive processes and lack the ability to explain how and why foreign language ...
Joutsenlahti, Jorma; Kulju, Pirjo
The purpose of this study is to present a multimodal languaging model for mathematics education. The model consists of mathematical symbolic language, a pictorial language, and a natural language. By applying this model, the objective was to study how 4th grade pupils (N = 21) understand the concept of division. The data was collected over six…
Gerlach, Martin; Altmann, Eduardo G.
We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf’s and Heaps’ law to two-scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
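The two-class mechanism can be mimicked with a toy simulation (illustrative only; the parameters and functional form are assumptions, not the authors' fitted model): core words never affect the innovation probability, while the chance of coining a new noncore word decays as the noncore vocabulary grows, producing sublinear (Heaps-like) vocabulary growth.

```python
import random

# Toy two-class vocabulary model inspired by the abstract. Parameters
# (p_core, alpha) are arbitrary illustration values, not fitted ones.
def simulate(n_tokens, p_core=0.5, alpha=50.0, seed=42):
    rng = random.Random(seed)
    noncore_types = 0          # distinct noncore words seen so far
    checkpoints = {}
    for t in range(1, n_tokens + 1):
        if rng.random() >= p_core:               # this token is a noncore slot
            p_new = alpha / (alpha + noncore_types)
            if rng.random() < p_new:             # coin a brand-new word;
                noncore_types += 1               # future innovation gets rarer
        # (core tokens come from a fixed small lexicon and change nothing)
        if t in (1000, 2000, 4000, 8000):
            checkpoints[t] = noncore_types
    return checkpoints

growth = simulate(8000)
print(growth)   # vocabulary size grows, but slower than linearly in t
```

Because p_new shrinks as the noncore vocabulary accumulates, doubling the text length less than doubles the vocabulary, which is the qualitative signature of Heaps' law that the paper generalizes to two scaling regimes.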
The focus of this research is on the nature of lexical cross-linguistic influence (CLI) between non-native languages. Using oral interviews with 157 L1 Italian high-school students studying English and German as non-native languages, the project investigated which kinds of lexis appear to be more susceptible to transfer from German to English and…
Jiang, Jiepu; Lu, Wei; Rong, Xianqian; Gao, Yangyan
In this paper, we propose two methods to adapt language modeling methods for expert search to the INEX entity ranking task. In our experiments, we notice that language modeling methods for expert search, if directly applied to the INEX entity ranking task, cannot effectively distinguish entity types. Thus, our proposed methods aim at resolving this problem. First, we propose a method to take into account the INEX category query field. Second, we use an interpolation of two language models to rank entities, which can solely work on the text query. Our experiments indicate that both methods can effectively adapt language modeling methods for expert search to the INEX entity ranking task.
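The interpolation of two language models mentioned above is, in its generic form, a linear mixture of their word probabilities. A minimal sketch (the corpora, the mixing weight, and the unigram simplification are all illustrative assumptions, not the INEX setup):

```python
import math
from collections import Counter

# Generic linear interpolation of two unigram language models:
#   P(w) = lam * P_a(w) + (1 - lam) * P_b(w)
def unigram_lm(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolated_score(query, lm_a, lm_b, lam=0.7, floor=1e-9):
    # log-probability of the query under the mixture of the two models
    score = 0.0
    for w in query.split():
        p = lam * lm_a.get(w, 0.0) + (1 - lam) * lm_b.get(w, 0.0)
        score += math.log(max(p, floor))   # floor avoids log(0) for unseen words
    return score

# Hypothetical example: one model from an entity's own text, one from
# text describing its type, echoing the entity-vs-type idea in the abstract.
entity_lm = unigram_lm("radar inventor engineer radar patent")
type_lm = unigram_lm("person inventor scientist engineer")
s = interpolated_score("radar inventor", entity_lm, type_lm)
print(s)
```

Ranking entities by such a mixed score lets evidence about the entity itself and evidence about its type both contribute, which is the role the interpolation plays in the authors' second method.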
Full Text Available The aim of this paper is to outline classroom tandem by comparing it with informal tandem learning contexts and other language instruction methods. Classroom tandem is used for second language instruction in mixed language groups in the subjects of Finnish and Swedish as L2. Tandem learning entails that two persons with different mother tongues learn each other’s native languages in reciprocal cooperation. The students function, in turn, as a second language learner and as a model in the native language. We aim to give an overview description of the interaction in classroom tandem practice. The empirical data consists of longitudinal video recordings of meetings of one tandem dyad within a co-located Swedish-medium and Finnish-medium school. The focus of the analysis is on the language aspects the informants orient to and topicalize in their interaction. The language aspects vary depending on which classroom activities they are engaged in, text-based or oral.
Norland, Deborah; Pruett-Said, Terry
Written by teachers for teachers, "A Kaleidoscope of Models and Strategies for Teaching English to Speakers of Other Languages," is a practical introduction to models and strategies employed in the teaching of English language learners. Each chapter discusses several models and/or strategies by focusing on particular methods and gives the…
Webb, Ken; White, Tony
The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the Unified Modeling Language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
Mulyar, Nataliya; van der Aalst, Wil M P; Peleg, Mor
Languages used to specify computer-interpretable guidelines (CIGs) differ in their approaches to addressing particular modeling challenges. The main goals of this article are: (1) to examine the expressive power of CIG modeling languages, and (2) to define the differences, from the control-flow perspective, between process languages in workflow management systems and modeling languages used to design clinical guidelines. The pattern-based analysis was applied to the guideline modeling languages Asbru, EON, GLIF, and PROforma. We focused on control-flow and left other perspectives out of consideration. We evaluated the selected CIG modeling languages and identified their degree of support of 43 control-flow patterns. We used a set of explicitly defined evaluation criteria to determine whether each pattern is supported directly, indirectly, or not at all. PROforma offers direct support for 22 of 43 patterns, Asbru 20, GLIF 17, and EON 11. All four directly support basic control-flow patterns, cancellation patterns, and some advanced branching and synchronization patterns. None support multiple instances patterns. They offer varying levels of support for synchronizing merge patterns and state-based patterns. Some support a few scenarios not covered by the 43 control-flow patterns. CIG modeling languages are remarkably close to traditional workflow languages from the control-flow perspective, but cover many fewer workflow patterns. CIG languages offer some flexibility that supports modeling of complex decisions and provide ways for modeling some decisions not covered by workflow management systems. Workflow management systems may be suitable for clinical guideline applications.
Tedyyana, Agus; Danuri; Lidyawati
Admission to Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint selection admission test for state polytechnics (SB-UMPN), and the independent route (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, particular attention was paid to password-protecting the test questions before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model on a Local Area Network (LAN). The result of the design was a computer-based test application for the admission selection of Politeknik Negeri Bengkalis.
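The question-randomization step can be sketched with a standard Fisher-Yates shuffle; the function and question names below are illustrative, not taken from the application described in the abstract:

```python
import random

def fisher_yates_shuffle(items, rng=None):
    """Return a new list with the items in uniformly random order."""
    rng = rng or random
    result = list(items)
    # Walk from the last index down, swapping each slot with a uniformly
    # chosen earlier (or same) slot; every permutation is equally likely.
    for i in range(len(result) - 1, 0, -1):
        j = rng.randint(0, i)
        result[i], result[j] = result[j], result[i]
    return result

questions = [f"Q{n}" for n in range(1, 21)]
shuffled = fisher_yates_shuffle(questions, random.Random(42))
```

Passing a seeded `random.Random` makes the shuffle reproducible for testing; in production an unseeded generator would be used so each candidate sees a different order.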
Vazquez, F.; Castelló, X.; San Miguel, M.
We investigate the dynamics of two agent based models of language competition. In the first model, each individual can be in one of two possible states, either using language X or language Y, while the second model incorporates a third state XY, representing individuals that use both languages (bilinguals). We analyze the models on complex networks and two-dimensional square lattices by analytical and numerical methods, and show that they exhibit a transition from one-language dominance to language coexistence. We find that the coexistence of languages is more difficult to maintain in the bilinguals model, where the presence of bilinguals facilitates the ultimate dominance of one of the two languages. A stability analysis reveals that the coexistence is more unlikely to happen in poorly connected than in fully connected networks, and that the dominance of just one language is enhanced as the connectivity decreases. This dominance effect is even stronger in a two-dimensional space, where domain coarsening tends to drive the system towards language consensus.
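The lattice dynamics described above can be sketched with a minimal voter-model-style simulation: each step, a random agent copies the language of a random neighbour, so domains coarsen over time. This is an illustrative simplification, not the paper's exact microscopic rules, and all parameter values are invented:

```python
import random

def simulate(L=10, steps=20000, seed=1):
    """Two-state language competition on an L x L periodic grid.
    Returns the final fraction of agents using language X."""
    rng = random.Random(seed)
    grid = [[rng.choice("XY") for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # The chosen agent adopts a random neighbour's language.
        grid[i][j] = grid[(i + di) % L][(j + dj) % L]
    flat = [s for row in grid for s in row]
    return flat.count("X") / len(flat)
```

With enough steps the fraction drifts toward 0 or 1 (consensus on one language), mirroring the coarsening effect the abstract describes for two-dimensional space.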
Hiemstra, Djoerd; de Vries, A.P.
During the last two years, exciting new approaches to information retrieval were introduced by a number of different research groups that use statistical language models for retrieval. This paper relates the retrieval algorithms suggested by these approaches to widely accepted retrieval algorithms.
Abdullah, Muhammad Ridhuan Tony Lim; Siraj, Saedah; Asra; Hussin, Zaharah
In the field of distance education, learning mediated through mobile technology, or mobile learning (mLearning), has rapidly been building a repertoire of influence in distance education research. This paper aims to propose an mLearning curriculum implementation model for an English Language and Communication skills course among undergraduates using…
Gallagher, H. Colin; Robins, Garry
As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…
Zanev, Vladimir; Radenski, Atanas
This paper analyzes difficulties with the introduction of object-oriented concepts in introductory computing education and then proposes a two-language, two-paradigm curriculum model that alleviates such difficulties. Our two-language, two-paradigm curriculum model begins with teaching imperative programming using the Python programming language, continues with teaching object-oriented computing using Java, and concludes with teaching object-oriented data structures in Java.
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
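The WFST machinery itself requires a transducer library such as OpenFst; as a minimal stand-in for the LM-combination idea, the sketch below linearly interpolates two unigram models trained on different text sources. The corpora, weight, and function names are invented for illustration and are not the paper's method:

```python
from collections import Counter

def unigram_lm(corpus):
    """Maximum-likelihood unigram model: P(w) = count(w) / total."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return lambda w: counts[w] / total if total else 0.0

def interpolate(lm_a, lm_b, lam=0.5):
    """Linearly interpolated LM: P(w) = lam * P_a(w) + (1 - lam) * P_b(w)."""
    return lambda w: lam * lm_a(w) + (1 - lam) * lm_b(w)

# Two toy "genres" combined with a 0.7 / 0.3 weighting.
news = "the markets rose the markets fell".split()
chat = "lol the cat sat".split()
lm = interpolate(unigram_lm(news), unigram_lm(chat), lam=0.7)
p_the = lm("the")
```

In the WFST formulation, the same interpolation is realised by a union/composition of weighted automata, which lets the decoder consume the combined model without code changes.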
Fang, Gang; Zhang, Shemin; Dong, Yafei
By successively assembling genetic parts such as BioBricks according to grammatical models, complex genetic constructs composed of dozens of functional blocks can be built. However, each category of genetic parts usually includes a few or many parts. With an increasing quantity of genetic parts, the process of assembling more than a few sets of these parts can be expensive, time consuming and error prone. At the last step of assembly it is somewhat difficult to decide which part should be selected. Based on a statistical language model, which is a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence, the most commonly used parts are selected. Then, a dynamic programming algorithm was designed to figure out the solution of maximum probability. The algorithm optimizes the results of a genetic design based on a grammatical model and finds an optimal solution. In this way, redundant operations can be reduced and the time and cost required for conducting biological experiments can be minimized. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
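The dynamic-programming step can be sketched as a Viterbi-style search that picks one part per slot so as to maximise the product of bigram probabilities. All part names and probabilities below are invented for illustration; the paper's actual grammar and statistics are not reproduced here:

```python
import math

def best_assembly(slots, bigram_logp, start="<s>"):
    """Viterbi-style DP over candidate parts.  `slots` is a list of
    candidate-part lists (one list per position in the construct);
    `bigram_logp[(prev, cur)]` holds log P(cur | prev).  Returns the
    maximum-probability part sequence and its log-probability."""
    best = {start: (0.0, [])}          # part -> (best score, best path)
    for candidates in slots:
        nxt = {}
        for cur in candidates:
            score, path = max(
                (s + bigram_logp.get((prev, cur), -math.inf), pth)
                for prev, (s, pth) in best.items()
            )
            nxt[cur] = (score, path + [cur])
        best = nxt
    score, path = max(best.values())
    return path, score

slots = [["promoter_A", "promoter_B"], ["rbs_1"], ["gfp"]]
bigram_logp = {
    ("<s>", "promoter_A"): math.log(0.6),
    ("<s>", "promoter_B"): math.log(0.4),
    ("promoter_A", "rbs_1"): math.log(0.9),
    ("promoter_B", "rbs_1"): math.log(0.5),
    ("rbs_1", "gfp"): math.log(0.8),
}
path, score = best_assembly(slots, bigram_logp)
```

Because each slot keeps only the best-scoring path per candidate, the search is linear in the number of slots rather than exponential in the number of part combinations.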
The main goal of the paper is to present a putative role of consciousness in language capacity. The paper contrasts the two approaches characteristic of cognitive semiotics and cognitive science. Language is treated as a mental phenomenon and a cognitive faculty (in contrast to approaches that define language as a primarily social phenomenon). The analysis of language activity is based on Chalmers' (1996) distinction between two forms of consciousness: phenomenal (simply "consciousness") and psychological ("awareness"). The approach is seen as an alternative to phenomenological analyses typical of cognitive semiotics.
.... Initiated in 2004 at Defense Research and Development Canada (DRDC), the SACOT knowledge engineering research project is currently investigating, developing and validating innovative natural language processing (NLP...
The study presented in this paper was conducted within the theoretical framework of the three-dimensional global-trait model of lexical knowledge proposed by [Henrikson, B. 1999. Three dimensions of vocabulary development. Studies in Second Language Acquisition 21, pp. 303-317], consisting of "breadth," "depth," and "receptive-productive"…
Altstaedter, Laura Levi; Smith, Judith J.; Fogarty, Elizabeth
This overview article focuses on the co-teaching model, an innovative and comprehensive model for student teaching experiences that provides opportunities to foreign language preservice teachers to develop their knowledge base about teaching and learning foreign languages while gaining in other areas: autonomy, collaboration, and agency. The…
Hagoort, P.; Hickok, G.; Small, S.L.
A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological
A primary challenge given to university foreign language departments and Title VI National Resource Centers is to increase interest and participation in foreign language learning, with particular emphasis on less commonly taught languages (LCTLs). Given that many LCTLs in high demand by the US government, including Arabic, Chinese, Persian and Turkish, rarely find their way into the school curricula, this article offers a successful ongoing community-based model of how one university-town partnership addresses advocacy with programming for pre-K-grade 9. Non-native and heritage undergraduate language students who volunteered as community language teachers found the experience invaluable to their pedagogical development. Teacher education programs or language departments can employ this approach to community-based teaching, by providing free, sustained language teaching in existing community centers. This article offers guidance for how to start and expand such a program.
Abriam-Yago, K; Yoder, M; Kataoka-Yahiro, M
The health care system requires nurses with the language ability and the cultural knowledge to meet the health care needs of ethnic minority immigrants. The recruitment, admission, retention, and graduation of English as a Second Language (ESL) students are essential to provide the workforce to meet the demands of the multicultural community. Yet, ESL students possess language difficulties that affect their academic achievement in nursing programs. The application of the Cummins Model of language proficiency is discussed. The Cummins Model provides a framework for nursing faculty to develop educational support that meets the learning needs of ESL students.
Hung, Nguyen Viet
Researches of English language teaching (ELT) have focused on using mother tongue (L1) for years. The proliferation of task-based language teaching (TBLT) has been also occurred. Considerable findings have been made in the existing literature of the two fields; however, no mentions have been made in the combination of these two ELT aspects, i.e.,…
Moughamian, Ani C.; Rivera, Mabel O.; Francis, David J.
This publication seeks to offer educators and policy-makers guidance on strategies that have been effective in instructing English language learners (ELLs). The authors begin by outlining key contextual factors that decision-makers should take into account when making instructional choices for English language learners, then follow with a brief…
Much previous research has pointed to the need for a unified framework for language contact phenomena -- one that would include social factors and motivations, structural factors and linguistic constraints, and psycholinguistic factors involved in processes of language processing and production. While Contact Linguistics has devoted a great deal…
The Unified Modeling Language (UML) has become the industry standard for visualizing, designing and documenting software systems. Using UML we can model any type of software application, which may in turn be written in any of many languages. SMS (Short Message Service) is the best choice for solving the geographic problems of spreading information to Unsoed alumni. The aim of this research is to compile UML notation in the development of an SMS server for Unsoed alumni. The research was conducted with software engineering methods. The resulting design shows that UML helps in the design and programming of the software.
Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo
For reasons of preserving endangered languages, we propose, in this paper, a novel two-language competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which differ from previous state-dependent impulsive differential equations. Using qualitative analysis methods, we obtain that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of state-dependent impulsive control strategies. The simulations also show that the state-dependent impulsive control strategy can be applied to other general two-language competitive models with the desired result. The results indicate that the fractions of two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding new control measures to protect endangered languages is offered.
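The state-dependent impulsive control idea can be sketched by Euler-integrating an Abrams-Strogatz-style competition equation and applying an impulse whenever the endangered language's speaker fraction reaches a threshold. The equation and every parameter value below are illustrative stand-ins, not the paper's actual model:

```python
def simulate_impulsive(x0=0.3, s=0.4, a=1.31, h=0.15, boost=0.1,
                       dt=0.01, t_end=50.0):
    """Euler integration of dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a,
    where x is the endangered language's fraction and s < 0.5 its prestige,
    with a state-dependent impulse: whenever x falls to the threshold h,
    it is bumped up by `boost` (e.g. a policy intervention).
    Returns the final fraction and the number of impulses applied."""
    x, t, impulses = x0, 0.0, 0
    while t < t_end:
        dx = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
        x += dt * dx
        if x <= h:            # control threshold reached: apply impulse
            x += boost
            impulses += 1
        t += dt
    return x, impulses
```

With the low-prestige setting the fraction decays between impulses and is repeatedly pushed back up, producing the kind of periodic orbit that keeps the language above extinction, in the spirit of the order-1 periodic solutions the abstract describes.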
Risner, Mary E; Markley, Linda
As world economies become more connected, it is increasingly important to prepare students with language and cultural skills necessary to work on internationally diverse teams within the US or abroad. Since the use of language and culture for the workplace has not been a traditional focus in coursework, professional development for foreign language teachers must expand to include best practices, resources, and program models that develop globally competent citizens for twenty-first-century ca...
TEMİZKAN, Rahman; TEMİZKAN, Saadet Pınar
Referring to the fact that the tourism sector offers many job opportunities, tourism undergraduate programs have an important place in the tourism education system in Turkey. Due to the importance of foreign language (FL) skills in the tourism sector, it is necessary to revise the foreign language teaching models of these programs. The purpose of the study is to determine the perceptions, expectations and suggestions of students and faculty members regarding the foreign language teachi...
Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung
A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public-domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first an XML validation process, and second a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
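The encode-then-reverse-parse check at the heart of the validation cycle can be sketched as a roundtrip test: serialise a medical logic module to XML, parse it back, and confirm nothing was lost. The element names below are invented for illustration and are not the actual ArdenML schema:

```python
import xml.etree.ElementTree as ET

def to_xml(mlm):
    """Serialise a toy MLM-like mapping of slot name -> text to XML."""
    root = ET.Element("mlm")
    for slot, value in mlm.items():
        ET.SubElement(root, slot).text = value
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    """Reverse-parse the XML back into a slot mapping."""
    return {child.tag: child.text for child in ET.fromstring(text)}

mlm = {"title": "Penicillin allergy alert", "version": "1.0"}
roundtrip = from_xml(to_xml(mlm))
```

A schema (XSD) validation step, as used in the paper, would additionally reject documents whose structure drifts from the agreed model; that requires a validating library beyond the standard `xml.etree` module.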
Kudryashova Alexandra V.
The present paper deals with the theoretical and methodological background of the concept of blended learning, which is a major didactic tool in modern methods of foreign language teaching. It also considers the principles of integrating blended learning into the teaching of foreign languages in engineering institutions. The basics of the pedagogical modelling used for developing a model of integrating blended learning into foreign language teaching are defined. A schematic representation of the model is given, and the way of implementing the described model in the educational process is shown via the example of a lesson on "Cohesive devices".
The use of digital learning resources creates an increasing need for semantic metadata describing the whole resource as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup for parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology, where metadata is "pointed in" as annotations using XPointers and RDF, is proposed. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".
Ghijsen, M.; van der Ham, J.; Grosso, P.; de Laat, C.
This paper describes the Infrastructure and Network Description Language (INDL). The aim of INDL is to provide technology independent descriptions of computing infrastructures. These descriptions include the physical resources and the network infrastructure that connects these resources. The
This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since automatic speech recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. To do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, the use of hierarchical Language Models in a language understanding task has been successfully demonstrated in an additional series of experiments.
Wombacher, Andreas; Aberer, K.
Automating information commerce requires languages to represent typical information commerce processes. Existing languages and standards either cover only very specific types of business models or are too general to capture in a concise way the specific properties of information commerce.
Using structural equation modeling analysis, this study examined the contribution of vocabulary and grammatical knowledge to second language reading comprehension among 190 advanced Chinese English as a foreign language learners. Vocabulary knowledge was measured in both breadth (Vocabulary Levels Test) and depth (Word Associates Test);…
Language teachers are called upon to understand both the nature of students' intercultural competence and their own role in its development. Limited research attention has been paid to the relationship between the types of behaviour that language teachers model and the intercultural competence their students acquire. This article reports on a case…
Đolić Slobodanka R.
This article will discuss one possible model of teaching English as a foreign language with intercultural requirements, environmental psychological influences and active and genuine participation of learners as issues that help develop learning skills to negotiate meanings across languages and cultures. Environmental conditions are considered central to developing teaching and learning abilities. This discussion is based on two theoretical concepts: intercultural communicative competence (Byr...
Mabbott, Ann; And Others
This paper argues that instruction can play a significant role in second language acquisition (SLA) and that the acculturation process can, to some extent, take place in the second language classroom as well as the naturalistic setting. J. H. Schumann's acculturation model of SLA contends that learners will succeed in SLA only to the extent they…
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
Alghbban, Mohammed I.; Ben Salamh, Sami; Maalej, Zouheir
The current article investigates teachers' metaphoric modeling of foreign language teaching and learning at the College of Languages and Translation, King Saud University. It makes use of teaching philosophy statements as a corpus. Our objective is to analyze the underlying conceptualizations of teaching/learning, the teachers' perception of the…
Watson, Gina D.; Bellon-Harn, Monica L.
Tier 2 supplemental instruction within a response to intervention framework provides a unique opportunity for developing partnerships between speech-language pathologists and classroom teachers. Speech-language pathologists may participate in Tier 2 instruction via a consultative or collaborative service delivery model depending on district needs.…
Ciechanowski, Kathryn M.
This research explores third-grade science and language instruction for emergent bilinguals designed through a framework of planning, lessons, and assessment in an interconnected model including content, linguistic features, and functions. Participants were a team consisting of a language specialist, a classroom teacher, and a researcher who designed…
Önem, Evrim; Ergenç, Iclal
Much research has shown a negative relationship between high levels of anxiety and success in English language teaching. This paper aimed to test a teaching model intended to affect anxiety and success levels in English language teaching at the same time, in a control-experiment group with pre- and post-test study…
Raikov, Ivan; De Schutter, Erik
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
Grando, M Adela; Glasspool, David; Fox, John
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
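A minimal place/transition net is enough to illustrate the Petri-net formalism in which the workflow patterns are expressed; the sketch below encodes the basic "sequence" pattern (task B only after task A) and is not the paper's proof machinery:

```python
class PetriNet:
    """Minimal place/transition net: places hold tokens, transitions
    consume one token from each input place and add one to each output."""
    def __init__(self, marking):
        self.marking = dict(marking)        # place name -> token count
        self.transitions = {}               # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# The "sequence" workflow pattern: B becomes enabled only after A fires.
net = PetriNet({"start": 1})
net.add_transition("A", inputs=["start"], outputs=["mid"])
net.add_transition("B", inputs=["mid"], outputs=["end"])
```

Checking whether a guideline language can reproduce a pattern then amounts to asking whether it admits a behaviourally equivalent (bisimilar) process, which is where the process-calculus notions come in.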
While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, as readers are able to operate metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there is a lack of research testing the Compensatory Encoding Model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one focused on testing whether reading conditions varying in time affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constraint and non-time-constraint reading. The correlation and regression analyses showed that the association between word recognition, working memory, and reading comprehension was much stronger under time constraint than in the non-time-constraint reading condition. Study two examined whether FL readers were able to operate metacognitive reading strategies as a compensatory means of reading comprehension for inefficient word recognition and working memory limitations in non-time-constraint reading. The participants were tested on the same computerized English word recognition test and Operation Span test. They were required to think aloud while reading and to complete the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified
The Java programming language provides safety and security guarantees such as type safety and its security architecture. They distinguish it from other mainstream programming languages like C and C++. In this work, we develop a machine-checked model of concurrent Java and the Java memory model and investigate the impact of concurrency on these guarantees. From the formal model, we automatically obtain an executable verified compiler to bytecode and a validated virtual machine.
Yang, Chihae; Tarkhov, Aleksey; Marusczyk, Jörg; Bienfait, Bruno; Gasteiger, Johann; Kleinoeder, Thomas; Magdziarz, Tomasz; Sacher, Oliver; Schwab, Christof H; Schwoebel, Johannes; Terfloth, Lothar; Arvidson, Kirk; Richard, Ann; Worth, Andrew; Rathman, James
Chemotypes are a new approach for representing molecules, chemical substructures and patterns, reaction rules, and reactions. Chemotypes are capable of integrating types of information beyond what is possible using current representation methods (e.g., SMARTS patterns) or reaction transformations (e.g., SMIRKS, reaction SMILES). Chemotypes are expressed in the XML-based Chemical Subgraphs and Reactions Markup Language (CSRML), and can be encoded not only with connectivity and topology but also with properties of atoms, bonds, electronic systems, or molecules. CSRML has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory, and commercial-use chemical space, as well as to represent chemical patterns and properties especially relevant to various toxicity concerns. A software application, ChemoTyper, has also been developed and made publicly available in order to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as a reference implementation, so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML-based CSRML standard used to express chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge.
Full Text Available The tradition of incorporating CALL into the language-learning curriculum goes back to the early 1980s at Coventry University, and since then has evolved in keeping with changes in the technology available (Corness 1984; Benwell 1986; Orsini-Jones 1987; Corness et al. 1992; Orsini-Jones 1993). Coventry University is at present pioneering the integration of hypermedia into the curriculum for the teaching of Italian language and society. The syllabus for a complete module of the BA Modern Languages and BA European Studies degrees, which will count as 1/8th of the students' programme for year 2, has been designed around in-house-produced hypermedia courseware.
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
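The core idea, combining per-letter classifier evidence with a word-level prior over a limited vocabulary, can be sketched as a toy Bayes computation. The vocabulary, prior values, and likelihoods below are invented for illustration and are not taken from the paper:

```python
# Hypothetical limited vocabulary with an assumed word-level prior.
vocabulary = {"cat": 0.5, "car": 0.3, "can": 0.2}

# Assumed per-position letter likelihoods from a P300 classifier:
# likelihoods[i][letter] = P(EEG data at position i | letter flashed).
likelihoods = [
    {"c": 0.9},
    {"a": 0.8},
    {"t": 0.3, "r": 0.6, "n": 0.1},
]

def word_posterior(vocab, liks):
    """Combine letter-level evidence with a word prior via Bayes' rule."""
    scores = {}
    for word, prior in vocab.items():
        like = 1.0
        for i, ch in enumerate(word):
            like *= liks[i].get(ch, 1e-6)
        scores[word] = prior * like
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

posterior = word_posterior(vocabulary, likelihoods)
best = max(posterior, key=posterior.get)
```

Note how the evidence for the third letter ("r" more likely than "t") overrides the prior preference for "cat"; conversely, a strong prior can correct a noisy letter, which is the error-correction behaviour the abstract describes.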
Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges facing today’s IT industry. Meeting these challenges demands increases in both the productivity and the quality of software. Model-based testing is a promising technique for meeting them. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose the Relation Definition Markup Language (RDML) for defining the relationships between models.
Hummel, Hans; Manderveld, Jocelyn; Tattersall, Colin; Koper, Rob
Published: Hummel, H. G. K., Manderveld, J. M., Tattersall, C., & Koper, E. J. R. (2004). Educational Modelling Language: new challenges for instructional re-usability and personalized learning. International Journal of Learning Technology, 1(1), 110-111.
This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…
Palafox, Benjamin; Patouillard, Edith; Tougher, Sarah; Goodman, Catherine; Hanson, Kara; Kleinschmidt, Immo; Torres Rueda, Sergio; Kiefer, Sabine; O'Connell, Kate; Zinsou, Cyprien; Phok, Sochea; Akulayi, Louis; Arogundade, Ekundayo; Buyungo, Peter; Mpasela, Felton; Poyer, Stephen; Chavasse, Desmond
The private for-profit sector is an important source of treatment for malaria. However, private patients face high prices for the recommended treatment for uncomplicated malaria, artemisinin combination therapies (ACTs), which makes them more likely to receive cheaper, less effective non-artemisinin therapies (nATs). This study seeks to better understand consumer antimalarial prices by documenting and exploring the pricing behaviour of retailers and wholesalers. Using data collected in 2009-10, we present survey estimates of antimalarial retail prices, and wholesale- and retail-level price mark-ups from six countries (Benin, Cambodia, the Democratic Republic of Congo, Nigeria, Uganda and Zambia), along with qualitative findings on factors affecting pricing decisions. Retail prices were lowest for nATs, followed by ACTs and artemisinin monotherapies (AMTs). Retailers applied the highest percentage mark-ups on nATs (range: 40% in Nigeria to 100% in Cambodia and Zambia), whereas mark-ups on ACTs (range: 22% in Nigeria to 71% in Zambia) and AMTs (range: 22% in Nigeria to 50% in Uganda) were similar in magnitude, but lower than those applied to nATs. Wholesale mark-ups were generally lower than those at retail level, and were similar across antimalarial categories in most countries. When setting prices wholesalers and retailers commonly considered supplier prices, prevailing market prices, product availability, product characteristics and the costs related to transporting goods, staff salaries and maintaining a property. Price discounts were regularly used to encourage sales and were sometimes used by wholesalers to reward long-term customers. Pricing constraints existed only in Benin where wholesaler and retailer mark-ups are regulated; however, unlicensed drug vendors based in open-air markets did not adhere to the pricing regime. These findings indicate that mark-ups on antimalarials are reasonable. Therefore, improving ACT affordability would be most readily
Singh, G.; de By, R.A.
We study the problem of assigning a spatial extent to a text phrase such as 'central northern California', with the objective of allowing spatial interpretations of natural language, and consistency testing of complex utterances that involve multiple phrases from which spatial extent can be derived.
Rodríguez, J. Tinguaro; Franco, Camilo; Montero, Javier
The evidence coming from cognitive psychology and linguistics shows that pairs of reference concepts (as e.g. good/bad, tall/short, nice/ugly, etc.) play a crucial role in the way we everyday use and understand natural languages in order to analyze reality and make decisions. Different situations...
This article critiques traditional single-level statistical approaches (e.g., multiple regression analysis) to examining relationships between language test scores and variables in the assessment setting. It highlights the conceptual, methodological, and statistical problems associated with these techniques in dealing with multilevel or nested…
Klarlund, Nils; Møller, Anders; Schwartzbach, Michael Ignatieff
XML (Extensible Markup Language), a linear syntax for trees, has gathered a remarkable amount of interest in industry. The acceptance of XML opens new venues for the application of formal methods such as specification of abstract syntax tree sets and tree transformations. A user domain may … be specified as a set of trees. For example, XHTML is a user domain corresponding to a set of XML documents that make sense as hypertext. A notation for defining such a set of XML trees is called a schema language. We believe that a useful schema notation must identify most of the syntactic requirements … to the DSD (Document Structure Description) notation as our bid on how to meet these requirements. The DSD notation was inspired by industrial needs. We show how DSDs help manage aspects of complex XML software through a case study about interactive voice response systems, i.e., automated telephone answering …
Corless, Inge B.; Limbo, Rana; Bousso, Regina Szylit; Wrenn, Robert L.; Head, David; Lickiss, Norelle; Wass, Hannelore
The aim of this work is to provide an overview of the key features of the expressions of grief. Grief is a response to loss or anticipated loss. Although universal, its oral and nonverbal expression varies across cultures and individuals. Loss is produced by an event perceived to be negative to varying degrees by the individuals involved and has the potential to trigger long-term changes in a person's cognitions and relationships. The languages used by the bereaved to express grief differ from the language used by professionals, creating dissonance between the two. Data were obtained from English language Medline and CINAHL databases, from professional and personal experiences, interviews with experts, and exploration of cemetery memorials. Blog websites and social networks provided additional materials for further refinement of the model. Content analysis of the materials and agreement by the authors as to the themes resulted in the development of the model. To bridge the gap between professional language and that used by the bereaved, a Languages of Grief model was developed consisting of four Modes of Expression, four Types of Language, plus three Contingent Factors. The Languages of Grief provides a framework for comprehending the grief of the individual, contributing to clinical understanding, and fruitful exploration by professionals in better understanding the use of languages by the bereaved. Attention to the Modes of Expression, Types of Language, and Contingent Factors provides the professional with a richer understanding of the grieving individual, a step in providing appropriate support to the bereaved. The Languages of Grief provides a framework for application to discrete occurrences with the goal of understanding grief from the perspective of the bereaved. PMID:25750773
Miller, Andrew K; Britten, Randall D; Nielsen, Poul M F
An important aspect of multi-scale modelling is the ability to represent mathematical models in forms that can be exchanged between modellers and tools. While the development of languages like CellML and SBML has provided standardised declarative exchange formats for mathematical models, independent of the algorithm to be applied to the model, to date these standards have not provided a clear mechanism for describing parameter uncertainty. Parameter uncertainty is an inherent feature of many real systems. This uncertainty can result from a number of situations, such as: when measurements include inherent error; when parameters have unknown values and so are replaced by a probability distribution by the modeller; when a model is of an individual from a population, and parameters have unknown values for the individual, but the distribution for the population is known. We present and demonstrate an approach by which uncertainty can be described declaratively in CellML models, by utilising the extension mechanisms provided in CellML. Parameter uncertainty can be described declaratively in terms of either a univariate continuous probability density function or multiple realisations of one variable or several (typically non-independent) variables. We additionally present an extension to SED-ML (the Simulation Experiment Description Markup Language) to describe sampling sensitivity analysis simulation experiments. We demonstrate the usability of the approach by encoding a sample model in the uncertainty markup language, and by developing a software implementation of the uncertainty specification (including the SED-ML extension for sampling sensitivity analyses) in an existing CellML software library, the CellML API implementation. We used the software implementation to run sampling sensitivity analyses over the model to demonstrate that it is possible to run useful simulations on models with uncertainty encoded in this form.
Andrew K Miller
Full Text Available An important aspect of multi-scale modelling is the ability to represent mathematical models in forms that can be exchanged between modellers and tools. While the development of languages like CellML and SBML has provided standardised declarative exchange formats for mathematical models, independent of the algorithm to be applied to the model, to date these standards have not provided a clear mechanism for describing parameter uncertainty. Parameter uncertainty is an inherent feature of many real systems. This uncertainty can result from a number of situations, such as: when measurements include inherent error; when parameters have unknown values and so are replaced by a probability distribution by the modeller; when a model is of an individual from a population, and parameters have unknown values for the individual, but the distribution for the population is known. We present and demonstrate an approach by which uncertainty can be described declaratively in CellML models, by utilising the extension mechanisms provided in CellML. Parameter uncertainty can be described declaratively in terms of either a univariate continuous probability density function or multiple realisations of one variable or several (typically non-independent) variables. We additionally present an extension to SED-ML (the Simulation Experiment Description Markup Language) to describe sampling sensitivity analysis simulation experiments. We demonstrate the usability of the approach by encoding a sample model in the uncertainty markup language, and by developing a software implementation of the uncertainty specification (including the SED-ML extension for sampling sensitivity analyses) in an existing CellML software library, the CellML API implementation. We used the software implementation to run sampling sensitivity analyses over the model to demonstrate that it is possible to run useful simulations on models with uncertainty encoded in this form.
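A sampling sensitivity analysis of the kind described above amounts to Monte Carlo propagation of a parameter distribution through a model. The one-parameter model and the normal distribution below are illustrative stand-ins, not part of CellML or SED-ML:

```python
import random
import statistics

random.seed(0)

# Hypothetical model: steady-state concentration y = source / k, standing
# in for a CellML model whose rate parameter k is uncertain.
def model(k):
    source = 2.0
    return source / k

# Parameter uncertainty described as a univariate distribution (assumed
# normal here); each draw is one realisation of the uncertain parameter.
samples = [random.gauss(1.0, 0.1) for _ in range(1000)]

# Sampling sensitivity analysis: propagate every realisation through the
# model and summarise the spread of the output.
outputs = [model(k) for k in samples]
mean_y = statistics.mean(outputs)
spread = statistics.stdev(outputs)
```

The declarative markup the paper proposes would encode the distribution over `k`; the simulation tool then performs exactly this kind of repeated sampling and evaluation.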
Dolin, R H; Alschuler, L; Behlen, F; Biron, P V; Boyer, S; Essin, D; Harding, L; Lincoln, T; Mattison, J E; Rishel, W; Sokolowski, R; Spinosa, J; Williams, J P
The HL7 SGML/XML Special Interest Group is developing the HL7 Document Patient Record Architecture. This draft proposal strives to create a common data architecture for the interoperability of healthcare documents. Key components are that it is under the umbrella of HL7 standards, it is specified in Extensible Markup Language, the semantics are drawn from the HL7 Reference Information Model, and the document specifications form an architecture that, in aggregate, define the semantics and structural constraints necessary for the exchange of clinical documents. The proposal is a work in progress and has not yet been submitted to HL7's formal balloting process.
Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie
Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematics and computer modeling for language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Next, most language competition models are based on the assumption that one language in the model is stronger than the other. These studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of two languages. A third issue with current studies is that many studies have an evolution result where the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made for the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
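The Lotka-Volterra style of competition model described above can be illustrated with a minimal forward-Euler sketch. The parameter values are invented for illustration, chosen so that inter-language competition is weak enough for the coexistence outcome the paper emphasises:

```python
# Forward-Euler sketch of Lotka-Volterra competition between two
# languages x and y; all parameter values are illustrative assumptions.
def step(x, y, r1=0.5, r2=0.5, K1=1.0, K2=1.0, a12=0.4, a21=0.4, dt=0.01):
    dx = r1 * x * (1 - (x + a12 * y) / K1)  # growth limited by both languages
    dy = r2 * y * (1 - (y + a21 * x) / K2)
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.2  # initial speaker fractions
for _ in range(20000):
    x, y = step(x, y)

# With weak cross-competition (a12, a21 < 1) the system settles at the
# coexistence equilibrium x* = y* = (K - a*K) / (1 - a12*a21).
```

With strong cross-competition (coefficients above 1) the same iteration instead drives one language extinct, which is the classical outcome the paper argues against treating as inevitable.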
Alastair Charles Smith
Full Text Available Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behaviour and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language mediated eye gaze.
Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
Models of language processing in the human brain often emphasize the prediction of upcoming input-for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.
Diamond, Steven; Boyd, Stephen
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
Roberts, Megan Y.; Kaiser, Ann P.; Wolfe, Cathy E.; Bryant, Julie D.; Spidalieri, Alexandria M.
Purpose: In this study, the authors examined the effects of the Teach-Model-Coach-Review instructional approach on caregivers' use of four enhanced milieu teaching (EMT) language support strategies and on their children's use of expressive language. Method: Four caregiver-child dyads participated in a single-subject, multiple-baseline study.…
Kareva, Veronika; Echevarria, Jana
In this paper we present a comprehensive model of instruction for providing consistent, high quality teaching to L2 students. This model, the SIOP Model (Sheltered Instruction Observation Protocol), provides an explicit framework for organizing instructional practices to optimize the effectiveness of teaching second and foreign language learners.…
Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan
… of this work is to provide a domain-specific language for generic models and an instantiator tool taking not only configuration data but also a generic model as input instead of using a hard-coded generator for instantiating only one fixed generic model and its properties with configuration data …
The main aim of the present paper is to introduce a model for digital game categorization suitable for use in English language learning studies: the Scale of Social Interaction (SSI) Model (original idea published as Sundqvist, 2013). The SSI Model proposes a classification of commercial off-the-shelf (COTS) digital games into three categories:…
During the past ten years, research on second language acquisition (SLA) has expanded; at the same time, different models and hypotheses have been proposed to explain and account for the processes underlying SLA. Four models seem to be dominant at the present time: (1) the monitor model, which distinguishes between implicit or unconscious language…
Haddad, Peiman; Cheung, Fred; Pond, Gregory; Easton, Debbie; Cops, Frederick; Bezjak, Andrea; McLean, Michael; Levin, Wilfred; Billingsley, Susan; Williams, Diane; Wong, Rebecca
Purpose To evaluate the impact of computed tomographic (CT) planning in comparison to clinical mark-up (CM) for palliative radiation of chest wall metastases. Methods and Materials In patients treated with CM for chest wall bone metastases (without conventional simulation/fluoroscopy), two consecutive planning CT scans were acquired with and without an external marker to delineate the CM treatment field. The two sets of scans were fused for evaluation of clinical tumor volume (CTV) coverage by the CM technique. Under-coverage was defined as the proportion of CTV not covered by the CM 80% isodose. Results Twenty-one treatments (ribs 17, sternum 2, and scapula 2) formed the basis of our study. Due to technical reasons, comparable data between CM and CT plans were available for 19 treatments only. CM resulted in a mean CTV under-coverage of 36%. Eleven sites (58%) had an under-coverage of >20%. Mean volume of normal tissues receiving ≥80% of the dose was 5.4% in CM and 9.3% in CT plans (p = 0.017). Based on dose-volume histogram comparisons, CT planning resulted in a change of treatment technique from direct apposition to a tangential pair in 7 of 19 cases. Conclusions CT planning demonstrated a 36% under-coverage of CTV with CM of ribs and chest wall metastases
Full Text Available In order to build a secure computing environment, persons responsible for data security need tools that allow them to test the security of the data being protected. Research on passwords used in typical computing environments showed that easy-to-remember non-dictionary passwords are widely used. It should therefore be useful to build a statistical model that can then be used to create very effective password lists for testing the security of a given protected data object. The problem is that the society in a given location also uses foreign words from widely used languages. This article describes a comparison of different language models used for this new statistical candidate-generation method. The generator could then be used to test the strength of passwords protecting wireless networks that use WPA-PSK as their data encryption standard. The password candidates are passed to tools which perform the security audit. This method can also be described as sorting brute-force password candidates using knowledge about the languages used by the users. The tests showed that using a combination of language models (MIX of a specified language group) for the password-candidate generator could improve the speed of the security procedure by 37% relatively on average (60% speedup when finding 50% of passwords, in 0.69% vs 1.715% of brute-force combinations) compared to the mother-language model (SK), and gave a 20-times average absolute speedup compared to plain brute force.
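Sorting brute-force candidates by language-model probability can be sketched with a character-bigram model. The training strings and candidate passwords below are invented for illustration; a real generator would train on large corpora of the users' languages:

```python
# Train a character-level bigram model on example passwords (invented data).
training = ["heslo123", "password", "slovensko", "hesielko"]

counts = {}
for pw in training:
    for a, b in zip(pw, pw[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1

def score(candidate):
    """Product of smoothed bigram probabilities; higher = more language-like."""
    p = 1.0
    for a, b in zip(candidate, candidate[1:]):
        nexts = counts.get(a, {})
        total = sum(nexts.values())
        p *= (nexts.get(b, 0) + 1) / (total + 27)  # add-one smoothing
    return p

# Sort raw brute-force candidates so language-like strings are tried first.
candidates = ["qzxqzx", "heslov", "slovak"]
ordered = sorted(candidates, key=score, reverse=True)
```

The audit tool then consumes `ordered` instead of raw enumeration order, which is the speedup mechanism the abstract reports.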
Đolić Slobodanka R.
Full Text Available This article will discuss one possible model of teaching English as a foreign language with intercultural requirements, environmental psychological influences, and active and genuine participation of learners as issues that help develop learning skills to negotiate meanings across languages and cultures. Environmental conditions are considered central to developing teaching and learning abilities. This discussion is based on two theoretical concepts: intercultural communicative competence (Byram) and the intercultural speaker (Kramsch). English, a global language, has the strength to unite the peoples of the Earth and provide better conditions for a progressive and more profitable living and working environment.
Glenda A. Gunter
Full Text Available Abstract: Combining games with mobile devices can promote learning opportunities at the learners' fingertips and enable ubiquitous learning experiences. As teachers increasingly assign games to reinforce language learning, it becomes essential to evaluate how effective these applications are in helping students learn the content or develop the skills that the games are reinforcing. This article examines two English language learning apps under the RETAIN model (GUNTER; KENNY; VICK, 2008). The findings indicate that although these apps offer some language learning opportunities, they lack, among other elements, the scenario-based quality and gameplay needed for them to be considered games.
Shirai, M. [Central Research Institute of Electric Power Industry, Tokyo (Japan)
This paper introduces the new method for estimating the mark-up rate (the ratio of price to marginal cost) for each industry proposed by Nishimura and Shirai (1998), and the result of applying it to industries in Japan. The problem that arises as a barrier to applying a linearly homogeneous neoclassical production function (the presence of short-term fixed costs, or of short-term economies of scale) is solved by assuming that the production organization is fixed in the short term. Moreover, compatibility is maintained with the study by Basu, which indicates that the production function becomes linearly homogeneous once the short-term adjustment of the production organization is completed. The short-term production function obtained when this assumption is formalized is introduced. Next, a regression expression for estimating the mark-up rate from the short-term production function is given. The mark-up rate was estimated for 22 industries using data from 1965 to 1995. The results show that the industries for which the estimation is applicable are significantly in a state of imperfect competition. 13 refs., 1 fig., 1 tab.
Inigo San Gil; Wade Sheldon; Tom Schmidt; Mark Servilla; Raul Aguilar; Corinna Gries; Tanya Gray; Dawn Field; James Cole; Jerry Yun Pan; Giri Palanisamy; Donald Henshaw; Margaret O'Brien; Linda Kinkel; Kathrine McMahon; Renzo Kottmann; Linda Amaral-Zettler; John Hobbie; Philip Goldstein; Robert P. Guralnick; James Brunt; William K. Michener
The Genomic Standards Consortium (GSC) invited a representative of the Long-Term Ecological Research (LTER) to its fifth workshop to present the Ecological Metadata Language (EML) metadata standard and its relationship to the Minimum Information about a Genome/Metagenome Sequence (MIGS/MIMS) and its implementation, the Genomic Contextual Data Markup Language (GCDML) …
Full Text Available The paper examines one of the possible approaches to exploring the conceptual space represented by language signs and texts. The notion of the cognitheme as a unit of knowledge in the form of a proposition, functional for modelling the conceptual space, is defined and some principles of the cognitheme analysis are discussed. The cognitheme is considered as a unit of modelling mental entities reflected in the language, for example, such as the concept or the conceptual space connected with a text, and at the same time as a unit of conceptualization significant in its own right, revealing elements of knowledge important for a language community and thus fixed in language signs and texts. A feasible classification of cognithemes is described, examples illustrating this classification are given.
Al Tiyb Al Khaiyali
Full Text Available Reading comprehension instruction is considered one of the major challenges that most English language teachers and students encounter. Therefore, providing a systematic, explicit, and flexible model to teaching reading comprehension strategies could help resolve some of these challenges and increase the possibility of teaching reading comprehension, particularly in language learners’ classrooms. Consequently, the purpose of this paper is to provide a model to teach reading comprehension strategies in language learning classrooms. The proposed instructional model is divided into three systematic phases through which strategies are taught before reading, during reading, and after reading. Each phase is explained and elaborated using recommended models for teachers. Finally, suggested considerations to consolidate this model are provided.
Doshi, Finale; Roy, Nicholas
Spoken language is one of the most intuitive forms of interaction between humans and agents. Unfortunately, agents that interact with people using natural language often experience communication errors and do not correctly understand the user's intentions. Recent systems have successfully used probabilistic models of speech, language and user behaviour to generate robust dialogue performance in the presence of noisy speech recognition and ambiguous language choices, but decisions made using these probabilistic models are still prone to errors owing to the complexity of acquiring and maintaining a complete model of human language and behaviour. In this paper, a decision-theoretic model for human-robot interaction using natural language is described. The algorithm is based on the Partially Observable Markov Decision Process (POMDP), which allows agents to choose actions that are robust not only to uncertainty from noisy or ambiguous speech recognition but also unknown user models. Like most dialogue systems, a POMDP is defined by a large number of parameters that may be difficult to specify a priori from domain knowledge, and learning these parameters from the user may require an unacceptably long training period. An extension to the POMDP model is described that allows the agent to acquire a linguistic model of the user online, including new vocabulary and word choice preferences. The approach not only avoids a training period of constant questioning as the agent learns, but also allows the agent actively to query for additional information when its uncertainty suggests a high risk of mistakes. The approach is demonstrated both in simulation and on a natural language interaction system for a robotic wheelchair application.
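The belief update at the heart of such a POMDP dialogue manager can be sketched as a Bayes filter over hidden user intents. The intents and observation probabilities below are invented; a real system would add state transition dynamics, reward-driven action selection, and the online model learning the paper describes:

```python
# Belief: a distribution over hidden user intents (assumed example values).
belief = {"go_kitchen": 0.5, "go_bedroom": 0.5}

# Observation model P(recognised word | intent): the speech recogniser
# sometimes confuses acoustically similar commands (values assumed).
obs_model = {
    ("kitchen", "go_kitchen"): 0.7,
    ("kitchen", "go_bedroom"): 0.2,
    ("bedroom", "go_kitchen"): 0.3,
    ("bedroom", "go_bedroom"): 0.8,
}

def update(belief, word):
    """One Bayes-filter step: reweight by the observation likelihood."""
    new = {s: obs_model[(word, s)] * p for s, p in belief.items()}
    z = sum(new.values())
    return {s: p / z for s, p in new.items()}

belief = update(belief, "kitchen")
# When the belief stays spread out, the POMDP policy prefers a clarifying
# question; when it concentrates, the agent commits to the action.
```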
Roberts, Megan Y; Kaiser, Ann P; Wolfe, Cathy E; Bryant, Julie D; Spidalieri, Alexandria M
In this study, the authors examined the effects of the Teach-Model-Coach-Review instructional approach on caregivers' use of four enhanced milieu teaching (EMT) language support strategies and on their children's use of expressive language. Four caregiver-child dyads participated in a single-subject, multiple-baseline study. Children were between 24 and 42 months of age and had language impairment. Interventionists used the Teach-Model-Coach-Review instructional approach to teach caregivers to use matched turns, expansions, time delays, and milieu teaching prompts during 24 individualized clinic sessions. Caregiver use of each EMT language support strategy and child use of communication targets were the dependent variables. The caregivers demonstrated increases in their use of each EMT language support strategy after instruction. Generalization and maintenance of strategy use to the home was limited, indicating that teaching across routines is necessary to achieve maximal outcomes. All children demonstrated gains in their use of communication targets and in their performance on norm-referenced measures of language. The results indicate that the Teach-Model-Coach-Review instructional approach resulted in increased use of EMT language support strategies by caregivers. Caregiver use of these strategies was associated with positive changes in child language skills.
Metcalf, Chris; Lewis, Grace A
The OWL Web Ontology Language for Services (OWL-S) is a language for describing the properties and capabilities of Web Services in such a way that the descriptions can be interpreted by a computer system in an automated manner. This technical note presents the results of applying the model problem approach to examine the feasibility of using OWL-S to allow applications to automatically discover, compose, and invoke services in a dynamic service-oriented environment.
This paper presents an open source implementation of a neural language model for machine translation. Neural language models deal with the problem of data sparsity by learning distributed representations for words in a continuous vector space. The language modelling probabilities are estimated by projecting a word's context into the same space as the word representations and by assigning probabilities proportional to the distance between the words and the context's projection. Neural language models are notoriously slow to train and test. Our framework is designed with scalability in mind and provides two optional techniques for reducing the computational cost: the so-called class decomposition trick and a training algorithm based on noise contrastive estimation. Our models may be extended to incorporate direct n-gram features to learn weights for every n-gram in the training data. Our framework comes with wrappers for the cdec and Moses translation toolkits, allowing our language models to be incorporated as normalized features in their decoders (inside the beam search).
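The scoring step such models share can be sketched as a softmax over dot products between word embeddings and the projected context. The toy embeddings below are invented; note that the full normalisation computed here is exactly the cost the class-decomposition and noise-contrastive techniques mentioned above are designed to avoid.

```python
import math

def lm_probs(context_vec, word_vecs):
    """p(w | context) ∝ exp(r_w · c): score each word embedding against the
    projected context, then normalise over the whole vocabulary."""
    scores = {w: math.exp(sum(a * b for a, b in zip(v, context_vec)))
              for w, v in word_vecs.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# Toy 2-d embeddings, purely illustrative.
vecs = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [-1.0, 0.0]}
probs = lm_probs([1.0, 0.0], vecs)   # context projection close to "cat"
```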
Brown, Ramsay A; Swanson, Larry W
Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to systematically describe the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to digitally synthesize systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to systematically encode and communicate reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.
Lefever, S.; Hellemans, J.; Pattyn, F.; Przybylski, D.R.; Taylor, C.; Geurts, R.; Untergasser, A.; Vandesompele, J.
The XML-based Real-Time PCR Data Markup Language (RDML) has been developed by the RDML consortium (http://www.rdml.org) to enable straightforward exchange of qPCR data and related information between qPCR instruments and third party data analysis software, between colleagues and collaborators and
Shindyalov, I N; Chang, W; Pu, C; Bourne, P E
Macromolecular query language (MMQL) is an extensible interpretive language in which to pose questions concerning the experimental or derived features of the 3-D structure of biological macromolecules. MMQL is intended to be intuitive, with a simple syntax, so that from a user's perspective complex queries are easily written. A number of basic queries and a more complex query--determination of structures containing a five-strand Greek key motif--are presented to illustrate the strengths and weaknesses of the language. The predominant features of MMQL are a filter and pattern grammar which are combined to express a wide range of interesting biological queries. Filters permit the selection of object attributes, for example, compound name and resolution, whereas the patterns currently implemented query primary sequence, close contacts, hydrogen bonding, secondary structure, conformation and amino acid properties (volume, polarity, isoelectric point, hydrophobicity and different forms of exposure). MMQL queries are processed by MMQLlib, a C++ class library to which new query methods and pattern types are easily added. The prototype implementation described uses PDBlib, another C++-based class library for representing the features of biological macromolecules at the level of detail parsable from a PDB file. Since PDBlib can represent data stored in relational and object-oriented databases, as well as PDB files, once these data are loaded they too can be queried by MMQL. Performance metrics are given for queries of PDB files for which all derived data are calculated at run time and compared to a preliminary version of OOPDB, a prototype object-oriented database with a schema based on a persistent version of PDBlib which offers more efficient data access and the potential to maintain derived information. MMQLlib, PDBlib and associated software are available via anonymous ftp from cuhhca.hhmi.columbia.edu.
The article presents a new model of a linguistic educational process that can be implemented in the practice of teaching a foreign language in a technical university. The proposed model takes into account the characteristic features of the mindset of students of technical universities and faculties, and it constitutes a matrix with a binary opposition. Filled-in matrix cells represent the structure of the language knowledge content in a visual form. Knowledge of the system organization of a language helps the students to understand "language in action" in a way that corresponds to their left-hemisphere mindset. Knowledge of the dominant-hemisphere cerebration peculiarities of students of technical specializations (engineering physicists) allows us to model a linguistic educational process in a non-linguistic university. A complex linking of linguo-didactic components makes teachers of a foreign language take into consideration the results of research in the field of functional interhemispheric asymmetry of the brain. The emphasis on the abilities of the left hemisphere, dominant among these students, has to change the approach of teachers of foreign languages to the organization of the linguistic educational process in a technical university. It is also important to consider that the skills which defined life in the information age remain necessary, but they alone are no longer sufficient for personal self-realization in the new conceptual age.
Lê, Lam Son; Wegmann, Alain
In enterprise architecture, the goal is to integrate business resources and IT resources in order to improve an enterprise's competitiveness. In an enterprise architecture project, the development team usually constructs a model that represents the enterprise: the enterprise model. In this paper, we present a modeling language for building such enterprise models. Our enterprise models are hierarchical object-oriented representations of the enterprises. This paper presents the foundations of o...
Gómez, Harold F; Hucka, Michael; Keating, Sarah M; Nudelman, German; Iber, Dagmar; Sealfon, Stuart C
MATLAB is popular in biological research for creating and simulating models that use ordinary differential equations (ODEs). However, sharing or using these models outside of MATLAB is often problematic. A community standard such as the Systems Biology Markup Language (SBML) can serve as a neutral exchange format, but translating models from MATLAB to SBML can be challenging, especially for legacy models not written with translation in mind. We developed MOCCASIN (Model ODE Converter for Creating Automated SBML INteroperability) to help. MOCCASIN can convert ODE-based MATLAB models of biochemical reaction networks into the SBML format. MOCCASIN is available under the terms of the LGPL 2.1 license (http://www.gnu.org/licenses/lgpl-2.1.html). Source code, binaries and test cases can be freely obtained from https://github.com/sbmlteam/moccasin. More information is available at https://github.com/sbmlteam/moccasin. © The Author 2016. Published by Oxford University Press.
Poetry is the art shaped through language; to talk about a poem we need at least to talk about its language--but what can be said will depend on the particular linguistic theory, with its particular modelling of language, which we bring to the description. This paper outlines the approach of SFL (Systemic Functional Linguistics), describing in…
Crook, Sharon M; Howell, Fred W
eXtensible Markup Language (XML) technology provides an ideal representation for the complex structure of models and neuroscience data, as it is an open file format and provides a language-independent method for storing arbitrarily complex structured information. XML is composed of text and tags that explicitly describe the structure and semantics of the content of the document. In this chapter, we describe some of the common uses of XML in neuroscience, with case studies in representing neuroscience data and defining model descriptions based on examples from NeuroML. The specific methods that we discuss include (1) reading and writing XML from applications, (2) exporting XML from databases, (3) using XML standards to represent neuronal morphology data, (4) using XML to represent experimental metadata, and (5) creating new XML specifications for models.
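Method (1), writing XML from an application, can be sketched with Python's standard library. The tag and attribute names below are illustrative stand-ins for a morphology description, not the actual NeuroML schema.

```python
import xml.etree.ElementTree as ET

# Build a small morphology description as an XML tree from application code.
cell = ET.Element("cell", name="example_pyramidal_cell")
seg = ET.SubElement(cell, "segment", id="0", name="soma")
ET.SubElement(seg, "proximal", x="0", y="0", z="0", diameter="20")
ET.SubElement(seg, "distal", x="0", y="20", z="0", diameter="20")
xml_text = ET.tostring(cell, encoding="unicode")
```

Parsing the document back with `ET.fromstring` recovers the same structure, which is what makes XML convenient for language-independent exchange between tools.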
The aim of the present study is to develop an English learning model to increase students' language skills in the English subject for grade VIII students of SMP N 1 Uram Jaya through Directed-Project Based Learning (DPjBL) implementation. This study is designed as Research and Development (R & D) using the ADDIE development model. The researcher collected the data through a test, questionnaire, observation, and interviews, which were then analyzed qualitatively and quantitatively. The study revealed that Directed-Project Based Learning (DPjBL) implementation is significantly able to serve as a learning model that increases students' language skills.
Oken, Barry S; Orhan, Umut; Roark, Brian; Erdogmus, Deniz; Fowler, Andrew; Mooney, Aimee; Peters, Betts; Miller, Meghan; Fried-Oken, Melanie B
Some noninvasive brain-computer interface (BCI) systems are currently available for locked-in syndrome (LIS), but none have incorporated a statistical language model during text generation. Our aim was to begin to address the communication needs of individuals with LIS using a noninvasive BCI that involves rapid serial visual presentation (RSVP) of symbols and a unique classifier with electroencephalography (EEG) and language model fusion. The RSVP Keyboard was developed with several unique features. Individual letters are presented at 2.5 per second. Computer classification of letters as targets or nontargets based on EEG is performed using machine learning that incorporates a language model for letter prediction via Bayesian fusion, enabling targets to be presented only 1 to 4 times. Nine participants with LIS and 9 healthy controls were enrolled. After screening, subjects first calibrated the system and then completed a series of balanced word generation mastery tasks that were designed with 5 incremental levels of difficulty, which increased by selecting phrases for which the utility of the language model decreased naturally. Six participants with LIS and 9 controls completed the experiment. All LIS participants successfully mastered spelling at level 1, and one subject achieved level 5. Six of 9 control participants achieved level 5. Individuals who have incomplete LIS may benefit from an EEG-based BCI system which relies on EEG classification and a statistical language model. Steps to further improve the system are discussed.
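The Bayesian fusion of EEG evidence with a language-model prior amounts to a pointwise product followed by normalisation. The letter probabilities below are invented for illustration.

```python
# Posterior over candidate letters ∝ EEG likelihood × language-model prior.
def fuse(eeg_likelihood, lm_prior):
    post = {c: eeg_likelihood[c] * lm_prior[c] for c in eeg_likelihood}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

eeg = {"Q": 0.40, "U": 0.35, "X": 0.25}      # the classifier alone is ambiguous
prior = {"Q": 0.05, "U": 0.90, "X": 0.05}    # after a "Q", the LM strongly favours "U"
post = fuse(eeg, prior)
```

Because the prior sharpens the posterior, a confident decision can be reached after fewer presentations of the target letter, which is the point of the fusion.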
Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.
An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. In previous numerical simulations, the model showed that coexistence might lead to survival of both languages, with monolingual speakers alongside a bilingual community, or to extinction of the weaker tongue, depending on different parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely characterize the number and nature of the equilibrium points of the model, which depend on its parameters, and to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
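The family of models this analysis extends can be illustrated with a minimal Euler integration of the classic Abrams-Strogatz two-language competition dynamics (prestige s, volatility exponent a). This sketch deliberately omits the bilingual group and the interlanguage-distance parameter that the model above adds, so it is a simplified baseline, not the paper's equations.

```python
def abrams_strogatz(x0, s=0.6, a=1.31, dt=0.01, steps=5000):
    """Euler integration of dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a,
    where x is the fraction speaking language A, s its prestige,
    and a the volatility exponent."""
    x = x0
    for _ in range(steps):
        x += dt * ((1 - x) * s * x ** a - x * (1 - s) * (1 - x) ** a)
    return x

# Starting from parity, the higher-prestige language takes over: this is the
# inevitable-extinction outcome that the bilingual extension can avoid.
x_final = abrams_strogatz(0.5, s=0.6)
```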
Lee, Yung-Tsun Tina
This report documents a journey "from research to an approved standard" of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together.
Pinto, Giuliana; Bigozzi, Lucia; Gamannossi, Beatrice Accorti; Vezzani, Claudio
The aim of the present study is twofold: (1) contribute to identifying a model for the variables that compose the emergent literacy construct and their relationships; (2) assess the predictive power of the emergent literacy model on early writing abilities in a transparent orthography language. We examined emergent literacy skills in 464 children…
The aim of this paper is to describe a blended learning model to be used in Egyptian schools when teaching reading classes in English as a foreign language. This paper is divided into three parts. The first part outlines the Egyptian context and describes the target learners. The second part describes the suggested blended learning model, which is…
With the coming of the information age, computer-based teaching models have had an important impact on English teaching. Since 2004, trial instruction in the Network-Assisted Language Teaching (NALT) model, integrating English instruction and computer technology, has been launched at some universities in China, including China university of…
This article introduces two sophisticated statistical modeling techniques that allow researchers to analyze systematicity, individual variation, and nonlinearity in second language (L2) development. Generalized linear mixed-effects models can be used to quantify individual variation and examine systematic effects simultaneously, and generalized…
Phillips, Lawrence; Pearl, Lisa
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
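One ingredient of such segmentation strategies, scoring candidate word boundaries under a unigram word model, can be sketched with dynamic programming. The lexicon probabilities below are invented for illustration; a Bayesian segmenter infers the lexicon jointly from the corpus rather than taking it as given.

```python
import math

# Viterbi-style segmentation of an unsegmented character/phoneme stream:
# best[i] holds the best log-probability of a parse of stream[:i] and a
# backpointer to where its last word starts.
def segment(stream, lexicon, max_word_len=10):
    n = len(stream)
    best = [(0.0, 0)] + [(-math.inf, 0)] * n
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            w = stream[j:i]
            if w in lexicon and best[j][0] + math.log(lexicon[w]) > best[i][0]:
                best[i] = (best[j][0] + math.log(lexicon[w]), j)
    words, i = [], n
    while i > 0:                     # walk the backpointers to recover words
        j = best[i][1]
        words.append(stream[j:i])
        i = j
    return words[::-1]

lex = {"the": 0.3, "dog": 0.2, "do": 0.1, "g": 0.05, "gs": 0.05}
words = segment("thedog", lex)
```

Note how the probabilistic parse prefers "the dog" over "the do g" even though both are segmentable, because the joint probability of fewer, higher-probability words wins.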
Stoianov; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L
Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported
Pitrelli, J.F.; Ratzlaf, E.H.
We describe experiments varying the degree of language-model constraint applied to writer-independent online handwriting recognition. Six types of models are used, varying statistical components and hard constraints which govern recognition search during the sequencing of characters to form valid
Baylor, Carolyn; Hula, William; Donovan, Neila J.; Doyle, Patrick J.; Kendall, Diane; Yorkston, Kathryn
Purpose: To present a primarily conceptual introduction to item response theory (IRT) and Rasch models for speech-language pathologists (SLPs). Method: This tutorial introduces SLPs to basic concepts and terminology related to IRT as well as the most common IRT models. The article then continues with an overview of how instruments are developed…
This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes is done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…
Aysolmaz, Banu; Leopold, Henrik; Reijers, Hajo A.; Demirörs, Onur
Context: The analysis of requirements for business-related software systems is often supported by using business process models. However, the final requirements are typically still specified in natural language. This means that the knowledge captured in process models must be consistently
This paper aims at reviewing the most relevant linguistic applications developed in the intersection between three different fields: machine learning, formal language theory and agent technologies. On the one hand, we present some of the main linguistic contributions of the intersection between machine learning and formal languages, which constitutes a well-established research area known as Grammatical Inference. On the other hand, we present an overview of the main linguistic applications of models developed in the intersection between agent technologies and formal languages, such as colonies, grammar systems and eco-grammar systems. Our goal is to show how interdisciplinary research between these three fields can contribute to a better understanding of how natural language is acquired and processed.
Carro, Adrián; Toral, Raúl; Miguel, Maxi San
Inspired by language competition processes, we present a model of coupled evolution of node and link states. In particular, we focus on the interplay between the use of a language and the preference or attitude of the speakers towards it, which we model, respectively, as a property of the interactions between speakers (a link state) and as a property of the speakers themselves (a node state). Furthermore, we restrict our attention to the case of two socially equivalent languages and to socially inspired network topologies based on a mechanism of triadic closure. As opposed to most of the previous literature, where language extinction is an inevitable outcome of the dynamics, we find a broad range of possible asymptotic configurations, which we classify as: frozen extinction states, frozen coexistence states, and dynamically trapped coexistence states. Moreover, metastable coexistence states with very long survival times and displaying a non-trivial dynamics are found to be abundant. Interestingly, a system size scaling analysis shows, on the one hand, that the probability of language extinction vanishes exponentially for increasing system sizes and, on the other hand, that the time scale of survival of the non-trivial dynamical metastable states increases linearly with the size of the system. Thus, non-trivial dynamical coexistence is the only possible outcome for large enough systems. Finally, we show how this coexistence is characterized by one of the languages becoming clearly predominant while the other one becomes increasingly confined to ‘ghetto-like’ structures: small groups of bilingual speakers arranged in triangles, with a strong preference for the minority language, and using it for their intra-group interactions while they switch to the predominant language for communications with the rest of the population.
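The coupling of node states (speaker preference) and link states (language used per interaction) can be sketched with a toy Monte Carlo update. The rules below are drastically simplified stand-ins for the model's actual dynamics, chosen only to show the two state layers influencing each other.

```python
import random

# Toy coevolution: a link adopts the language preferred by one of its
# endpoints, then one endpoint realigns its preference with the language
# it uses most across its interactions.
def step(prefs, uses, edges, rng):
    i, j = rng.choice(edges)
    uses[(i, j)] = prefs[i] if rng.random() < 0.5 else prefs[j]
    node = rng.choice([i, j])
    used = [uses[e] for e in edges if node in e]
    prefs[node] = max(set(used), key=used.count)

rng = random.Random(0)
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]          # a triangle plus a tail
prefs = {0: "A", 1: "A", 2: "B", 3: "B"}          # node states: preference
uses = {e: rng.choice("AB") for e in edges}       # link states: language used
for _ in range(100):
    step(prefs, uses, edges, rng)
```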
Acher , Mathieu; Heymans , Patrick; Collet , Philippe; Lahire , Philippe
Variability modelling and management is a key activity in a growing number of software engineering contexts, from software product lines to dynamic adaptive systems. Feature models are the de facto standard to formally represent and reason about commonality and variability of a software system. This tutorial aims at presenting the next generation of feature modelling languages and tools, directly applicable to a wide range of model-based variability problems and application...
Catani, Marco; Bambini, Valentina
In humans, brain connectivity implements a system for language and communication that spans from basic pre-linguistic social abilities shared with non-human primates to syntactic and pragmatic functions particular to our species. The arcuate fasciculus is a central connection in this architecture, linking regions devoted to formal aspects of language with regions involved in intentional and social communication. Here, we outline a new anatomical model of communication that incorporates previous neurofunctional accounts of language with recent advances in tractography and neuropragmatics. The model consists of five levels, from the representation of informative actions and communicative intentions, to lexical/semantic processing, syntactic analysis, and pragmatic integration. The structure of the model is hierarchical in relation to developmental and evolutionary trajectories, and it may help interpret clinico-anatomical correlations in communication disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.
Manwaring, Stacy S; Mead, Danielle L; Swineford, Lauren; Thurm, Audrey
Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. This study examined the relationships between gesture, fine motor skills and language in young children with ASD compared with a comparison group, using multiple measures and methods in a structural equation modelling framework. Participants included 110 children with ASD and a non-ASD comparison group of 87 children (that included children with developmental delays (DD) or typical development (TD)), from 12 to 48 months of age. A construct of gesture use as measured by the Communication and Symbolic Behavior Scales-Developmental Profile Caregiver Questionnaire (CQ) and the Autism Diagnostic Observation Schedule (ADOS), as well as fine motor skills from the Mullen Scales of Early Learning and Vineland Adaptive Behavior Scales-II (VABS-II), was examined using second-order confirmatory factor analysis (CFA). A series of structural equation models then examined concurrent relationships between the aforementioned latent gesture construct and expressive and receptive language. A series of hierarchical regression analyses was run in a subsample of 36 children with ASD with longitudinal data to determine how gesture factor scores predicted later language outcomes. Across study groups, the gesture CFA model with indicators of gesture use from both the CQ (parent-reported) and ADOS (direct observation), and measures of fine motor skills, provided good fit, with all indicators significantly and strongly loading onto one gesture factor. This model of gesture use, controlling for age, was found to correlate strongly with concurrent expressive and receptive language. The correlations between gestures and concurrent language were similar in magnitude in both the ASD and non-ASD groups. In the longitudinal subsample of children with ASD, gestures at time 1 predicted later receptive (but not
Sun, Yudong; McKeever, Steve
Biomolecular modelling has provided computational simulation-based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between the different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
According to nonimaging optical principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme language programming were used to generate the lens model respectively. With the help of optical simulation software, a light source of 1 mm × 1 mm × 1 mm was used in the experiment, with a lateral migration distance of the illumination area of 0.5 m, and a total of one million rays were computed. We obtained the simulated results of both models. The simulated output shows that the Scheme language can prevent the model deformation problems caused by the process of model transfer; the degree of illumination uniformity reaches 82%, and the offset angle is 26°. The efficiency of the modeling process is also greatly increased by using the Scheme language.
The demands on the early language teacher and the limited lesson time seem to be some of the reasons for the lack of learners' engagement in interpersonal communication in early language programs. Although the research on the role of the classroom teacher in early language programs is scarce, there is evidence that the classroom teacher can play a…
Tremblay, Pascale; Dick, Anthony Steven
With the advancement of cognitive neuroscience and neuropsychological research, the field of language neurobiology is at a cross-roads with respect to its framing theories. The central thesis of this article is that the major historical framing model, the Classic "Wernicke-Lichtheim-Geschwind" model, and associated terminology, is no longer adequate for contemporary investigations into the neurobiology of language. We argue that the Classic model (1) is based on an outdated brain anatomy; (2) does not adequately represent the distributed connectivity relevant for language, (3) offers a modular and "language centric" perspective, and (4) focuses on cortical structures, for the most part leaving out subcortical regions and relevant connections. To make our case, we discuss the issue of anatomical specificity with a focus on the contemporary usage of the terms "Broca's and Wernicke's area", including results of a survey that was conducted within the language neurobiology community. We demonstrate that there is no consistent anatomical definition of "Broca's and Wernicke's Areas", and propose to replace these terms with more precise anatomical definitions. We illustrate the distributed nature of the language connectome, which extends far beyond the single-pathway notion of arcuate fasciculus connectivity established in Geschwind's version of the Classic Model. By illustrating the definitional confusion surrounding "Broca's and Wernicke's areas", and by illustrating the difficulty integrating the emerging literature on perisylvian white matter connectivity into this model, we hope to expose the limits of the model, argue for its obsolescence, and suggest a path forward in defining a replacement. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.
Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for
This literature review article approaches the topic of information and communications technologies from the perspective of their impact on the language learning process, with particular emphasis on the most appropriate designs of multimodal texts as informed by models of multimodal learning. The first part contextualizes multimodality within the fields of discourse studies, the psychology of learning and CALL; the second deals with multimodal conceptions of reading and writing by discussing hypertextuality and literacy. A final section outlines the possible implications of multimodal learning models for foreign language teaching and learning.
For several decades, a wide-spread consensus concerning the enormous importance of an in-depth clarification of the specifications of a product has been observed. A weak clarification of specifications is repeatedly listed as a main cause for the failure of product development projects. Requirements, which can be defined as the purpose, goals, constraints, and criteria associated with a product development project, play a central role in the clarification of specifications. The collection of activities which ensure that requirements are identified, documented, maintained, communicated, and traced throughout the life cycle of a system, product, or service can be referred to as “requirements engineering”. These activities can be supported by a collection and combination of strategies, methods, and tools which are appropriate for the clarification of specifications. Numerous publications describe the strategy and the components of requirements management. Furthermore, recent research investigates its industrial application. Simultaneously, promising developments of graph-based design languages for a holistic digital representation of the product life cycle are presented. Current developments realize graph-based languages by the diagrams of the Unified Modelling Language (UML), and allow the automatic generation and evaluation of multiple product variants. The research presented in this paper seeks to present a method in order to combine the advantages of a conscious requirements management process and graph-based design languages. Consequently, the main objective of this paper is the investigation of a model-based integration of requirements in a product development process by means of graph-based design languages. The research method is based on an in-depth analysis of an exemplary industrial product development, a gear system for so-called “Electrical Multiple Units” (EMU). Important requirements were abstracted from a gear system
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of a grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front, ranging from issues of generativity to the replication of human experimental findings, by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
von Davier, Matthias
Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
Bae, Kyoung-Il; Kim, Jung-Hyun; Huh, Soon-Young
Discusses process information sharing among participating organizations in a virtual enterprise and proposes a federated process framework and system architecture that provide a conceptual design for effective implementation of process information sharing supporting the autonomy and agility of the organizations. Develops the framework using an…
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
Full Text Available Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge; it was only used as supplementary features, while the current state-of-the-art system assumes independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model was employed to capture the dynamics of the PF sequences for language classification. Baseline systems were conducted to evaluate the features and methods, including Gaussian mixture model (GMM)-based systems using PFs, GMM using cepstral features, and the CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed a comparable result with the acoustic system based on i-vector. This research demonstrates that utilizing PFs can enhance the performance.
Pearl, Lisa S; Sprouse, Jon
Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.
Harmelen, van F.A.H.; Balder, J.
We present (ML)2, a formal language for the representation of KADS models of expertise. (ML)2 is a combination of first order predicate logic (for the declarative representation of domain knowledge), meta-logic (for the representation of how to use the domain knowledge) and dynamic logic (for the
Promnont, Piyapong; Rattanavich, Saowalak
The research aimed to study the development of eleventh grade students' reading and creative writing abilities, and their satisfaction, when taught through the concentrated language encounter instruction method, CLE model III. A one-experimental-group time series design was used, and the data were analyzed by MANOVA with repeated measures, t-test for one-group…
Abdelshaheed, Bothina S. M.
This study aims at investigating the effect of using Flipped Learning Model in teaching English language among female English majors in Majmaah University on their achievement in two different English courses and identifying their feelings and satisfaction about flipping their classes. The study used a pre-post test design and included two…
Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.
This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,
Bonaiuto, James J.; Bornkessel-Schlesewsky, Ina; Kemmerer, David; MacWhinney, Brian; Nielsen, Finn Årup; Oztop, Erhan
We assess the challenges of studying action and language mechanisms in the brain, both singly and in relation to each other to provide a novel perspective on neuroinformatics, integrating the development of databases for encoding – separately or together – neurocomputational models and empirical data that serve systems and cognitive neuroscience. PMID:24234916
Burtis, M.D. [comp.] [Oak Ridge National Lab., TN (United States). Carbon Dioxide Information Analysis Center; Razuvaev, V.N.; Sivachok, S.G. [All-Russian Research Inst. of Hydrometeorological Information--World Data Center, Obninsk (Russian Federation)
This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
MacKinnon, Teresa; Pasfield-Neofitou, Sarah
Language education faculty face myriad challenges in finding teaching resources that are suitable, of high quality, and allow for the modifications needed to meet the requirements of their course contexts and their learners. The article elaborates the grassroots model of "produsage" (a portmanteau of "production" and…
Zendi, Asma; Bouhadada, Tahar; Bousbia, Nabila
Semiformal EMLs are developed to facilitate the adoption of educational modeling languages (EMLs) and to address practitioners' learning design concerns, such as reusability and readability. In this article, SDLD (Structure Dialogue Learning Design) is presented, which is a semiformal EML that aims to improve controllability of learning design…
Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric
This paper studies ways for modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of relations between source and target metamodels and on the basis of generic transformation
Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying
The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…
Kalikow, Daniel N.
The report outlines the administrative setting and describes the experimental design to be used in field testing the Mark II model of the Automated Pronunciation Instructor (API) system. It presents the draft instructional curriculum for the Spanish-English and the English-Mandarin Chinese language pairs, and describes the hardware, pedagogical…
Cruz, Maria del C.; Ayala, Myrna
Case studies of eight children with speech and language impairments are presented in a review of the intervention efforts at the Demonstration Center for Preschool Special Education (DCPSE) in Puerto Rico. Five components of the intervention model are examined: social medical history, intelligence, motor development, socio-emotional development,…
Ullman, Michael T.; Lovelett, Jarrett T.
The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…
Virtual Reality Modelling Language (VRML) is a description language belonging to the Window-on-World class of virtual reality systems. A file in VRML format can be interpreted by a VRML browser as a three-dimensional scene. VRML was created to make it easier to represent virtual reality on the Internet. The development of 3D graphics is closely connected with Silicon Graphics Corporation. VRML 2.0 is the file format for describing interactive 3D scenes and objects. It can be used in collaboration with www...
Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.
In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ) pr...
Valverde, Sergi; Solé, Ricard V
Our interaction with complex computing machines is mediated by programming languages (PLs), which constitute one of the major innovations in the evolution of technology. PLs allow flexible, scalable, and fast use of hardware and are largely responsible for shaping the history of information technology since the rise of computers in the 1950s. The rapid growth and impact of computers were followed closely by the development of PLs. As occurs with natural, human languages, PLs have emerged and gone extinct. There has been always a diversity of coexisting PLs that compete somewhat while occupying special niches. Here we show that the statistical patterns of language adoption, rise, and fall can be accounted for by a simple model in which a set of programmers can use several PLs, decide to use existing PLs used by other programmers, or decide not to use them. Our results highlight the influence of strong communities of practice in the diffusion of PL innovations.
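The kind of adoption dynamics the abstract describes, in which programmers either imitate the language choice of other programmers or choose independently, can be sketched as a toy stochastic simulation. All parameter names and values below, and the model's details, are illustrative assumptions, not the authors' actual formulation:

```python
import random

def simulate_adoption(n_programmers=200, n_langs=5, p_social=0.8,
                      steps=50, seed=42):
    """Toy model: each step, one programmer either copies the language
    choice of a randomly met peer (social imitation) or picks a language
    at random (independent choice). Returns final usage counts."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_langs) for _ in range(n_programmers)]
    for _ in range(steps):
        i = rng.randrange(n_programmers)
        if rng.random() < p_social:
            choice[i] = choice[rng.randrange(n_programmers)]  # imitate a peer
        else:
            choice[i] = rng.randrange(n_langs)                # pick freely
    counts = [0] * n_langs
    for c in choice:
        counts[c] += 1
    return counts

counts = simulate_adoption()
print(counts)
```

With a high imitation probability, usage tends to concentrate on a few languages over time, echoing the "communities of practice" effect the abstract highlights.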
Yiboe Kofi Tsivanyo
Full Text Available This paper outlines some methodological challenges in investigating communicative models of teachers and students in French language classroom in some Senior High Schools in the Cape Coast metropolis - Ghana. The data collection procedure for this study focused on natural setting, use of objective views on the Ghanaian belief systems in the investigation process in order to structure the research and to avoid manipulating the study variables. The database consisted of classroom activities as well as extensive interviews with some old students on a year abroad linguistic programme in University of Strasbourg, France. The results showed that language usage in the French classroom was controlled by teachers. However, strategies used by teachers could contribute to effective language teaching if cultural dimensions were taken into consideration.
While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he allows (implicitly) for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, which is formalized by the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the Childes database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in these structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that the model can simulate some of the errors made by children in producing complex questions. Copyright © 2009 Cognitive Science Society, Inc.
Lopopolo, Alessandro; Frank, Stefan L; van den Bosch, Antal; Willems, Roel M
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.
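Surprisal and perplexity, the probabilistic measures mentioned above, have standard information-theoretic definitions that a short sketch makes concrete. The per-word probabilities below are made-up illustrations, not values from the study:

```python
import math

def surprisal(prob):
    """Surprisal of an event in bits: -log2 P."""
    return -math.log2(prob)

def perplexity(probs):
    """Perplexity of a sequence from its per-token conditional
    probabilities: 2 raised to the average per-token surprisal."""
    avg = sum(surprisal(p) for p in probs) / len(probs)
    return 2 ** avg

# Hypothetical conditional probabilities for a four-word sentence
probs = [0.25, 0.5, 0.125, 0.25]
pp = perplexity(probs)
print(pp)
```

In studies like the one above, such word-by-word surprisal values are used as regressors against brain activity time-locked to each word.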
Full Text Available Introduction: It is known that an intact cortical left hemisphere is crucial for language production. Recently, more credit is given to the right hemisphere and subcortical areas in the production of non-novel language, including formulaic language. John Hughlings Jackson (1874/1958) first described how propositional and non-propositional speech are differentially affected by neural impairment. Non-propositional language is often preserved following left hemisphere stroke even when aphasia is present (Code, 1982; Sidtis et al., 2009; Van Lancker Sidtis & Postman, 2006). With right hemisphere and subcortical stroke, formulaic language is reduced (Sidtis et al., 2009; Van Lancker Sidtis & Postman, 2006; Speedie et al., 1993). The dual process model of language competence states that propositional and non-propositional speech are processed differently in the brain, with novel speech controlled by the left hemisphere, and a right hemisphere/subcortical circuit modulating formulaic language (Van Lancker Sidtis, 2004; 2012). Two studies of formulaic language will be presented as further evidence of the dual process model: a study of formulaic language in Alzheimer's disease, and a study of recited speech in Parkinson's disease. Formulaic language includes overlearned words, phrases or longer linguistic units that are known to the native speaker, occur naturally in discourse, and are important for normal social interaction (Fillmore, 1979; Pawley & Syder, 1983; Van Lancker, 1988; Van Lancker Sidtis, 2004; Wray, 2002). Formulaic expressions include conversational speech formulas, idioms, proverbs, expletives, pause fillers, discourse elements, and sentence stems (stereotyped sentence-initials). Longer units of linguistic material, such as prayers, rhymes, and poems, termed recited speech, are another subtype of formulaic language that is learned in childhood and recited periodically throughout life. Cortical disease: Alzheimer's disease and formulaic
This contribution argues for the proposition that formal models based on the theory of formal grammars and languages are adequate for the study of some computationally relevant properties of agents and multi-agent systems. Some questions are formulated concerning the possibilities to enlarge the universality and realism of such models by considering the possibilities to go with their computing abilities beyond the traditional Turing-computability, and by considering very natural properties of...
Full Text Available gained a significant following because of its utility in producing predictions for these quantities. These models allow developers to employ the methods of pattern recognition to compute numerical targets for the fundamental frequency and amplitude... hand, depends on the dialect of the speaker (which in turn depends on factors such as the region where the speakers grew up and currently reside, possibly their ages and socio-economic environment, etc.) To address these issues, we have chosen...
This replication study tested MacIntyre's Social Psychological Model of Strategy Use. Participants were 137 first-year college students (100 men and 37 women), all in their late teens or early 20s, learning English as a foreign language in a university in Taiwan. MacIntyre specified three conditions for use of language-learning strategies in his model: awareness of the strategy, having a reason to use it, and not having a reason not to use it. Stepwise multiple regression analyses of data measured by Oxford's 50-item Strategy Inventory for Language Learning partially support this model because only Knowledge about the Strategy (representing the first condition) and Difficulty about Using It (representing the third condition) made significant independent contributions to the prediction of use of most of the 50 strategies. Close examination of the results poses questions about MacIntyre and Noels' thesis, as implied in their revised model, that reason to use the strategy and reason not to use the strategy are independent. The present replication suggests a need for further revision of the model. Use of methods more advanced than multiple regression is recommended to test and refine the model.
Bai, Hao; Zhang, Xi-wen
While Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, and on to their complex relations. Chinese characters written in digital ink by non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach based on hierarchical models is therefore presented. Hierarchical information includes strokes and hierarchical components. Each Chinese character is modeled as a hierarchical tree. Strokes in a Chinese character in digital ink are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. The structure of components in an ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored. Finally, the recognition candidates are listed in descending order of score. The method of this paper is validated by testing 19,815 samples of handwritten Chinese characters written by foreign students.
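The idea of modeling a character as a hierarchical tree and scoring candidates against an observed stroke-symbol sequence can be illustrated with a toy sketch. The character models, stroke symbols, and position-match score below are invented for illustration; the actual system classifies real stroke data with Hidden Markov Models:

```python
def leaves(tree):
    """Flatten a nested component tree into its stroke-symbol sequence."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for child in tree:
        out.extend(leaves(child))
    return out

def score(observed, tree):
    """Fraction of positions where observed strokes match the model's
    leaf sequence (0.0 if the lengths differ)."""
    model = leaves(tree)
    if len(model) != len(observed):
        return 0.0
    hits = sum(o == m for o, m in zip(observed, model))
    return hits / len(model)

# Hypothetical character models: nested components whose leaves are
# stroke symbols ("h" = horizontal, "v" = vertical)
models = {
    "A": (("h", "v"), ("h",)),
    "B": (("v", "v"), ("h",)),
}
observed = ["h", "v", "h"]
ranked = sorted(models, key=lambda c: score(observed, models[c]),
                reverse=True)
print(ranked[0])
```

Traversing candidates and ranking them by score mirrors the final step described in the abstract, where recognition candidates are listed in descending order.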
Vignollet, Laurence; Martel, Christian; Burgos, Daniel
A few eLearning research teams promoting a scenario-based approach have adopted the IMS-LD specification. At the same time, other teams have developed other notations, languages and meta-models related to IMS-LD, along with tools and methodologies for modelling and implementing learning activities on eLearning platforms. The aim of this special issue is to share and confront approaches (i.e., models, tools and methodologies) through modelling experiences of collaborative learning activities. T...
Full Text Available Language models (LMs) are an essential element in statistical approaches to natural language processing for tasks such as speech recognition and machine translation (MT). The advent of big data leads to the availability of massive amounts of data to build LMs, and in fact, for the most prominent languages, using current techniques and hardware, it is not feasible to train LMs with all the data available nowadays. At the same time, it has been shown that the more data is used for an LM the better the performance, e.g. for MT, without any indication yet of reaching a plateau. This paper presents CloudLM, an open-source cloud-based LM intended for MT, which allows one to query distributed LMs. CloudLM relies on Apache Solr and provides the functionality of state-of-the-art language modelling (it builds upon KenLM), while allowing one to query massive LMs (as the use of local memory is drastically reduced), at the expense of slower decoding speed.
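A count-based n-gram model of the kind such systems serve can be illustrated in miniature. This toy bigram model with add-one smoothing is a stand-in sketch only; CloudLM itself builds on KenLM and serves queries through Apache Solr, and the corpus below is invented:

```python
from collections import Counter

# Tiny illustrative corpus; a real LM would be trained on billions of words.
corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def bigram_prob(w1, w2):
    """P(w2 | w1) with add-one (Laplace) smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

p = bigram_prob("the", "cat")
print(round(p, 4))
```

The point of distributing such a model, as the abstract notes, is that the count tables for a massive LM no longer fit in local memory, so lookups like `bigram_prob` become remote queries.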
Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian
One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;
In general, an expression language provides a means to indicate non-constant values in expressions. It includes operations to combine values, but these will normally disappear when the expression is evaluated. HTEL is an expression language to produce HTML-documents. It is presented to stimulate a discussion about the structure of hypertext expression languages. The operations have been chosen in agreement with what is strongly suggested, but not defined, by the HTML-standard. The HTEL-interpreter can be used for cgi-programs, i.e. to describe reactions when data from a `form' in an HTML-document has been submitted. A special tool has been used to build the HTEL-interpreter, as an example belonging to a family of interpreters for domain-specific languages. Members of that family have characteristics that are closely related to structural patterns found in the mark-ups of HTML. HTEL should also be seen...
Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on [Formula: see text]-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to influence the translation quality more strongly. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and [Formula: see text]-gram-based systems, showing that the integrated approach seems more promising for [Formula: see text]-gram-based systems, even with non-full-quality NNLMs.
Full Text Available Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC, albeit in different ways. A revised ELU model is proposed based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Patwari, Puneet, E-mail: firstname.lastname@example.org; Chaudhuri, Subhrojyoti Roy; Natarajan, Swaminathan; Muralikrishna, G
Highlights: • It is challenging to maintain consistency in the current approach to M&C design. • Given the similarity across various projects, it appears ideal to propose a solution at the domain level. • The approach to creating a DSL for M&C involves viewing a system through the lenses of various domains. • M&CML provides a standard vocabulary and makes the entire process of M&C solution creation domain-aware. • M&CML provides a holistic view of the control architecture. • M&CML has support for inherent consistency checks, user assistance and third-party support. - Abstract: The use of a Systems Engineering (SE) language such as SysML [1,20] is common within the community of control system designers. However, the design handoff to the subsequent phases of control system development is carried out manually in most cases, without much tool support. The approach to agreeing on the control interface between components is a good example: engineers still rely on either manually created Interface Control Documents (ICDs) or one-off tools implemented by individual projects. The Square Kilometre Array (SKA) and the International Thermonuclear Experimental Reactor (ITER) are two good examples of such large projects adopting these approaches. This results in non-uniformity in the overall system design, since individual groups invent their own vocabulary while using a language like SysML, which leads to inconsistencies across the design, interfaces and realized code. To mitigate this, we propose the development of a Monitoring and Control Modeling Language (M&CML), a domain-specific language (DSL) [4,22] for specifying M&C solutions. M&CML starts by defining a vocabulary that borrows concepts from standard practices used in the control domain and incorporates a language which ensures uniformity and consistency across the M&C design, interface and implementation artifacts. In this paper we discuss this language, with an analysis of its usage to point out its benefits.
Arbib's article  offers a sophisticated and convincing account of the evolution of human language that does not shy away from nailing together neurophysiology and the forms and functions of language. The core recognition of what language does, rather than just what language looks like or how its forms are generated, gives the model a high level of explanatory significance.
The Model Integration System (MIST) is an open-source environmental-modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3×3 local neighbourhood needed to implement an averaging image filter can be accessed from within a simple loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
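The 3×3 averaging filter mentioned above can be written as exactly the kind of plain nested-loop code that MIST is designed to translate into whole-array operations. This is an illustrative pure-Python sketch, not MIST syntax.

```python
# A 3x3 mean filter written as the simple nested loops a modeller would
# naturally write; a MIST-style implementation would vectorize/parallelize
# these loops automatically. Borders are left untouched for simplicity.

def mean_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; borders keep old values
    for i in range(1, h - 1):              # interior pixels only
        for j in range(1, w - 1):
            neigh = [img[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(neigh) / 9.0
    return out

img = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
smoothed = mean_filter(img)
# the single bright pixel is averaged over its 3x3 neighbourhood
```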
van Leussen, Jan-Willem; Escudero, Paola
We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has fewer phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating in the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions on learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.
Smith, Alastair C; Monaghan, Padraic; Huettig, Falk
Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates' eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing - the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013a,b). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings. Copyright © 2014 Elsevier Inc. All rights reserved.
The current status of research on working memory (WM) and its components in second language acquisition (SLA) was examined in this review. A literature search was done on four aspects using search terms in Google Scholar. The review results are given as follows. 1. In the definition of WM, some confusion exists on whether short-term memory (STM), or recent memory, is the same as WM or different. 2. In this review, three main models have been discussed elaborately, as they are the only ones discussed in the literature: the multicomponent model of Baddeley (2000), the embedded process model of Cowan (2005) and the attention control model of Engle and Kane (2003). 3. The phonological and executive components of WM were examined in more detail, as these determine the two basic aspects of language acquisition: language characteristics and acquisition methods (Wen, 2012). Overall, the variables related to phonological and executive working memories are evident from published research, but their interactive relationships and affecting factors are not entirely clear. 4. Admittedly, several diverse internal and external factors affect WM in relation to SLA. Some practically useful interventions are indicated by certain findings.
Two schemata for refocusing language teacher education on teaching (rather than on such ancillary areas as applied linguistics or language acquisition) are proposed, including: (1) a descriptive model defining teaching as decision-making based on knowledge, skills, attitudes, and awareness; and (2) a related framework of two educating…
Marko, Inazio; Pikabea, Inaki
The aim of this study is to develop a reference model for intervention in the language processes applied to the transformation of language normalisation within organisations of a socio-economic nature. It is based on a case study of an experiment carried out over 10 years within a trade union confederation, and has pursued a strategy of a…
This paper will demonstrate how to enhance second language (L2) learners' linguistic and cultural competencies through the use of the Multiple Intelligences Film Teaching (MIFT) model. The paper will introduce two ideas to teachers of English as a Second/Foreign Language (ESL/EFL). First, the paper shows how L2 learners learn linguistic and…
DiNucci, David C.; Saini, Subhash (Technical Monitor)
Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).
Bizzotto, R; Smith, G; Yvon, F; Kristensen, NR; Swat, MJ
PharmML [1] is an XML-based exchange format [2,3,4] created with a focus on nonlinear mixed-effect (NLME) models used in pharmacometrics [5,6], but providing a very general framework that also allows describing mathematical and statistical models such as single-subject or nonlinear and multivariate regression models. This tutorial provides an overview of the structure of this language, brief suggestions on how to work with it, and use cases demonstrating its power and flexibility. PMID:28575551
Levinson, Stephen C.; Torreira, Francisco
The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioral data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks et al. (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviorally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next. PMID:26124727
Evers, Ken H.; Bachert, Robert F.
The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.
Theory; Learning Strategies (Definition and Classification; Strategies as Cognitive Processes); A Model for Research (Research Issues). …traditionally deal with a hero who sets out to accomplish a definite goal. By accomplishing the goal, he or she attains a reward, which is often material… that it can be used procedurally. This stage appears to correspond with what second language theorists term interlanguage: the not-yet-accurate use
Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of Model Fragments. Usability is a widely recognized quality criterion, essential to warrant the successful use of tools that put these ideas into practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of induction hobs. This evaluation addresses the configuration, scoping and visualization facets. The evaluation involved the end users of the tool, who are engineers of our induction-hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. The results also enabled us to identify a list of usability problems which may contribute to alleviating scoping and visualization issues in CVL.
Hockey, Beth Ann; Rayner, Manny
This paper presents a methodologically sound comparison of the performance of grammar-based (GLM) and statistical (SLM) language-model recognizer architectures using data from the Clarissa procedure navigator domain. The Regulus open source packages make this possible with a method for constructing a grammar-based language model by training on a corpus. We construct grammar-based and statistical language models from the same corpus for comparison, and find that the grammar-based language models provide better performance in this domain. The best SLM version has a semantic error rate of 9.6%, while the best GLM version has an error rate of 6.0%. Part of this advantage is accounted for by the superior word error rate (WER) and sentence error rate (SER) of the GLM (WER 7.42% versus 6.27%, and SER 12.41% versus 9.79%). The rest is most likely accounted for by the fact that the GLM architecture is able to use logical-form-based features, which permit tighter integration of recognition and semantic interpretation.
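WER figures like those quoted are conventionally computed as word-level Levenshtein distance divided by reference length; a generic sketch (not the Regulus/Clarissa tooling, and with an invented example utterance):

```python
def wer(ref, hyp):
    """Word error rate: minimum substitutions + insertions + deletions
    needed to turn the hypothesis into the reference, over ref length."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edits to turn the first j hyp words into the first i ref words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

rate = wer("go to step three", "go to stop three")
# one substitution over four reference words -> 0.25
```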
Gross, Alexander; Murthy, Dhiraj
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of available social data presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately argue that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'; using LDA, we were able to successfully model nested discussion topics from forums and blog posts. Importantly, we found that LDA can move us beyond the state of the art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
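LDA itself can be illustrated with a minimal collapsed Gibbs sampler. The two-theme toy corpus, hyperparameters and everything else below are invented for illustration; the authors' pipeline runs LDA variants over large social media corpora, not code like this.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed-Gibbs LDA: resample each token's topic from the
    conditional given all other assignments."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    nkw = defaultdict(int)            # topic-word counts
    ndk = defaultdict(int)            # doc-topic counts
    nk = [0] * k                      # topic totals
    z = []                            # topic assignment per token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            t = rng.randrange(k)
            zs.append(t); nkw[(t, w)] += 1; ndk[(d, t)] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]           # remove token from counts
                nkw[(t, w)] -= 1; ndk[(d, t)] -= 1; nk[t] -= 1
                weights = [(nkw[(tt, w)] + beta) / (nk[tt] + V * beta)
                           * (ndk[(d, tt)] + alpha) for tt in range(k)]
                t = rng.choices(range(k), weights)[0]
                z[d][i] = t           # put it back under the new topic
                nkw[(t, w)] += 1; ndk[(d, t)] += 1; nk[t] += 1
    return nkw, vocab

docs = [["gene", "dna", "gene"], ["dna", "gene", "dna"],
        ["forum", "post", "post"], ["post", "forum", "forum"]]
nkw, vocab = lda_gibbs(docs, k=2)
# with two clearly separated themes, each word typically ends up
# dominated by one of the two topics
gene_topic = max(range(2), key=lambda t: nkw[(t, "gene")])
post_topic = max(range(2), key=lambda t: nkw[(t, "post")])
```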
Schlegel, Christian; Schultz, Ulrik Pagh; Stinckwich, Serge
Proceedings of the Third International Workshop on Domain-Specific Languages and Models for Robotic Systems (DSLRob'12), held at the 2012 International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2012), November 2012 in Tsukuba, Japan. The main topics of the workshop were Domain-Specific Languages (DSLs) and Model-driven Architecture (MDA) for robotics. A domain-specific language (DSL) is a programming language dedicated to a particular problem domain that offers specific notations and abstractions that increase programmer productivity within that domain. Model-driven architecture (MDA) offers a high-level way for domain users to specify the functionality of their system at the right level of abstraction. DSLs and models have historically been used for programming complex systems. However, recently they have garnered interest as a separate field of study.
The objectives of this research were to (1) build a development model of English language learning management strategies to enhance communicative competence for high school students, and (2) study the results of using the model. The target group was seven English teachers in Pibulwittayalai School, and the sample for studying the results of the model on students was ten English club students in Pibulwittayalai School. The research tools were focus group discussion forms, communication plans, English skills evaluation forms, a communicative competence test, communicative competence evaluation forms and 21st-century skills evaluation forms. The model was examined by connoisseurship. The statistics for analyzing the data were frequency, percentage, mean, standard deviation and the Wilcoxon test. The results of the research were as follows: 1. The development model of English language learning management strategies to enhance communicative competence for high school students had 4 components: (1) SWOT analysis, (2) strategy development, (3) strategy assessment and (4) strategy adjustment. The model had 6 strategies: (1) genius academic strategy, (2) English through AEC, (3) English through World Class, (4) enhancing genius academic communication with foreigners, (5) enhancing English through world-class standards and (6) enhancing potential in English skills learning through world-class standards. These were merged into a single strategy, "Development of students' potential for communication". 2. The results of using the model comprised: 2.1 The results for teachers: teachers could carry out a SWOT analysis to determine strengths, weaknesses, opportunities and threats in English language learning management, received guidelines, and could appropriately and efficiently construct strategies of English language learning management to enhance communicative competence. 2.2 The results for students: the students gained 4 English skills: listening, speaking, reading and writing. It was
Zorzi, Marco; Testolin, Alberto; Stoianov, Ivilin P.
Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition. PMID:23970869
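The building block of the deep generative networks discussed here is typically a restricted Boltzmann machine (RBM) trained layer by layer. A minimal sketch with 1-step contrastive divergence follows; biases are omitted for brevity, the data are toy patterns, and this is not the models analysed in the article.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyRBM:
    """RBM trained with 1-step contrastive divergence (CD-1).
    Biases are omitted to keep the sketch short."""
    def __init__(self, n_vis, n_hid, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.gauss(0, 0.1) for _ in range(n_hid)]
                  for _ in range(n_vis)]
        self.rng = rng

    def hidden_probs(self, v):
        return [sigmoid(sum(v[i] * self.w[i][j] for i in range(len(v))))
                for j in range(len(self.w[0]))]

    def visible_probs(self, h):
        return [sigmoid(sum(h[j] * self.w[i][j] for j in range(len(h))))
                for i in range(len(self.w))]

    def cd1(self, v0, lr=0.1):
        h0 = self.hidden_probs(v0)
        hs = [1.0 if self.rng.random() < p else 0.0 for p in h0]  # sample
        v1 = self.visible_probs(hs)                  # one reconstruction step
        h1 = self.hidden_probs(v1)
        for i in range(len(v0)):                     # positive - negative phase
            for j in range(len(h0)):
                self.w[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])

rbm = TinyRBM(n_vis=6, n_hid=3)
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
for _ in range(200):
    for v in data:
        rbm.cd1(v)
recon = rbm.visible_probs(rbm.hidden_probs(data[0]))
```

Stacking several such layers, each trained on the hidden activities of the one below, yields the hierarchical generative model the article describes.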
Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe
The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were made based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high-level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems and thus enabling to use a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom. PMID:22670147
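The parallel rewriting at the heart of L-systems can be illustrated in a few lines of plain Python. This sketch is not L-Py itself, which adds turtle interpretation, parametric modules and MTG integration on top of such rewriting.

```python
# A tiny L-system interpreter: apply production rules in parallel to
# every symbol of the current string, `steps` times.

def derive(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)  # symbols without a rule stay
    return s

# Lindenmayer's classic algae model: A -> AB, B -> A
algae = derive("A", {"A": "AB", "B": "A"}, 5)
# string lengths follow the Fibonacci sequence: 1, 2, 3, 5, 8, 13
```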
Ganapathiraju, Madhavi K
Background: It has been suggested previously that genome and proteome sequences show characteristics typical of natural-language texts such as "signature-style" word usage indicative of authors or topics, and that algorithms originally developed for natural language processing may therefore be applied to genome sequences to draw biologically relevant conclusions. Following this approach of 'biological language modeling', statistical n-gram analysis has been applied to the comparative analysis of whole proteome sequences of 44 organisms. It has been shown that a few particular amino acid n-grams are found in abundance in one organism but occur very rarely in other organisms, thereby serving as genome signatures. At that time proteomes of only 44 organisms were available, limiting the generalization of this hypothesis. Today nearly 1,000 genome sequences and corresponding translated sequences are available, making it feasible to test the existence of biological language models over the evolutionary tree. Results: We studied whole proteome sequences of 970 microbial organisms using n-gram frequencies and cross-perplexity, employing the Biological Language Modeling Toolkit and the Patternix Revelio toolkit. Genus-specific signatures were observed even in a simple unigram distribution. By taking the statistical n-gram model of one organism as a reference and computing the cross-perplexity of all other microbial proteomes against it, cross-perplexity was found to be predictive of branch distance in the phylogenetic tree. For example, a 4-gram model of the proteome of Shigella flexneri 2a, which belongs to the class Gammaproteobacteria, showed a self-perplexity of 15.34, while the cross-perplexity of other organisms was in the range of 15.59 to 29.5 and was proportional to their branching distance from S. flexneri in the evolutionary tree. The organisms of this genus, which happen to be pathotypes of E. coli, also have the closest perplexity values with
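The n-gram frequency and cross-perplexity computations can be sketched generically. The sequences below are invented fragments rather than real proteomes, and the add-one smoothing is our simplification, not the toolkit's estimator.

```python
import math
from collections import Counter

def ngram_model(seq, n):
    """Add-one-smoothed n-gram model over the alphabet seen in `seq`."""
    grams = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(grams.values())
    vocab = len(set(seq)) ** n          # all possible n-grams over the alphabet
    return lambda g: (grams[g] + 1) / (total + vocab)

def cross_perplexity(model, seq, n):
    """exp of the average negative log-probability of seq's n-grams."""
    grams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    nll = -sum(math.log(model(g)) for g in grams) / len(grams)
    return math.exp(nll)

a = "MKVLAAGMKVLAAGMKVLAAG"   # toy 'reference proteome'
b = "MKVLAGGMKVLAGGMKVLAGG"   # similar sequence
c = "WWPRTYQWWPRTYQWWPRTYQ"   # distant sequence
m = ngram_model(a, 2)
close, far = cross_perplexity(m, b, 2), cross_perplexity(m, c, 2)
# a sequence similar to the reference scores lower cross-perplexity
# than a distant one, mirroring the branch-distance observation above
```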
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
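A direct covariance pattern such as compound symmetry is simple to write down element by element, which is exactly what a direct-pattern specification does. The Python sketch below is illustrative only; it is not the MSTRUCT syntax, and it checks the pattern exactly rather than fitting it statistically.

```python
# Compound symmetry: one common variance on the diagonal, one common
# covariance everywhere off the diagonal.

def compound_symmetry(p, var, cov):
    """Build a p x p compound-symmetry covariance matrix."""
    return [[var if i == j else cov for j in range(p)] for i in range(p)]

def fits_pattern(matrix, tol=1e-8):
    """True if `matrix` satisfies compound symmetry (up to `tol`)."""
    diag, off = matrix[0][0], matrix[0][1]
    return all(abs(matrix[i][j] - (diag if i == j else off)) < tol
               for i in range(len(matrix)) for j in range(len(matrix)))

cs = compound_symmetry(3, var=2.0, cov=0.5)
```

In a real analysis the two free parameters (var, cov) would be estimated and the pattern tested against the sample covariance via a fit statistic, as the MSTRUCT examples in the paper do.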
O'Connor, Martin J; Das, Amar
The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully applied these methods to information models in the HIV and radiological image domains.
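The XML-to-ontology idea can be sketched as a traversal that emits class and property assertions. The element names and triples below are invented for illustration; the actual workflow produces OWL ontologies via dedicated tools rather than ad-hoc triples.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of an XML-encoded information model.
XML = """
<patient id="p1">
  <test name="viralLoad" value="50000" unit="copies/mL"/>
</patient>
"""

def to_triples(xml_text):
    """Walk the XML tree, emitting (subject, predicate, object) triples:
    a type assertion for the root, a property per child element."""
    root = ET.fromstring(xml_text)
    subject = root.get("id")
    triples = [(subject, "rdf:type", root.tag.capitalize())]
    for child in root:
        triples.append((subject, child.get("name"), child.get("value")))
    return triples

triples = to_triples(XML)
```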
Edlund, Kristian Skjoldborg; Michelsen, Axel Gottlieb; Rudie, Karen
A method for modelling the class of discrete-event systems that characterise flowlines is developed. The legal languages are modelled as a set of inequalities, which effectively reduces the amount of memory needed for implementing the resulting supervisors, called inequality supervisors. An example demonstrates that the use of inequality supervisors can lead to an implementation where the memory usage is significantly reduced compared to both centralised and modular supervisors. In this way, the state-space explosion is mitigated by the approach presented here. Furthermore, the approach indicates...
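The idea of storing a supervisor as inequalities over event counters, instead of as an explicit state machine, can be sketched on a hypothetical one-buffer flowline (this example is ours, not taken from the paper): machine M1 puts parts into a buffer of capacity 2, M2 takes them out, and the supervisor enables an event only if the buffer inequality still holds afterwards.

```python
CAPACITY = 2  # buffer holds at most 2 parts

def allowed(counts, event):
    """Enable `event` only if 0 <= #in - #out <= CAPACITY afterwards.
    The whole supervisor is this one inequality, not a state machine."""
    n_in = counts["in"] + (event == "in")
    n_out = counts["out"] + (event == "out")
    return 0 <= n_in - n_out <= CAPACITY

counts = {"in": 0, "out": 0}
trace = []
for ev in ["in", "in", "in", "out", "in", "out", "out", "out"]:
    if allowed(counts, ev):
        counts[ev] += 1
        trace.append(ev)
# the third consecutive "in" is blocked (buffer full) and the final
# "out" is blocked (buffer empty)
```

The memory saving is the point: the counters and one inequality replace an explicit enumeration of buffer states.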
Green, Robin; Shou, Wenying
The ability to explain biological phenomena with mathematics and to generate predictions from mathematical models is critical for understanding and controlling natural systems. Concurrently, the rise in open-source software has greatly increased the ease with which researchers can implement their own mathematical models. With a reasonably sound understanding of mathematics and programming skills, a researcher can quickly and easily use such tools for their own work. The purpose of this chapter is to expose the reader to one such tool, the open-source programming language R, and to demonstrate its practical application to studying population dynamics. We use the Lotka-Volterra predator-prey dynamics as an example.
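The chapter works in R; the same Lotka-Volterra dynamics can be sketched in dependency-free Python with simple Euler integration (parameter and initial values below are arbitrary illustrations, not the chapter's):

```python
# Lotka-Volterra predator-prey model:
#   dx/dt = a*x - b*x*y        (prey)
#   dy/dt = d*b*x*y - g*y      (predators)

def lotka_volterra(x, y, a=1.0, b=0.5, d=0.5, g=1.0, dt=0.001, steps=20000):
    """Integrate with forward Euler steps; fine for a qualitative sketch,
    though a real study would use an adaptive ODE solver."""
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = d * b * x * y - g * y
        x, y = x + dx * dt, y + dy * dt
        xs.append(x)
        ys.append(y)
    return xs, ys

prey, pred = lotka_volterra(2.0, 1.0)
# both populations oscillate around the equilibrium but stay positive
```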
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
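The precision ordering of such a data lattice can be sketched for the missing-data case: an object with missing entries lies below (approximates) any object that fills them in. This toy encodes only that aspect of the ordering and ignores the precision ordering of real intervals that VIS-AD also includes.

```python
MISSING = None  # bottom element for a single value

def approximates(a, b):
    """True if tuple `a` is below `b` in the lattice: every defined
    entry of `a` agrees with the corresponding entry of `b`."""
    return all(x is MISSING or x == y for x, y in zip(a, b))

def join(a, b):
    """Least upper bound where it exists; None if the tuples disagree
    on a defined entry (no join in this toy order)."""
    out = []
    for x, y in zip(a, b):
        if x is MISSING:
            out.append(y)
        elif y is MISSING or x == y:
            out.append(x)
        else:
            return None  # incompatible values
    return tuple(out)

lo = (MISSING, 2.0, MISSING)
hi = (1.0, 2.0, 3.0)
```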
Schultz, Ulrik Pagh; Stinckwich, Serge
Proceedings of the Second International Workshop on Domain-Specific Languages and Models for Robotic Systems (DSLRob'11), held in conjunction with the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), September 2011 in San Francisco, USA. The main topics of the workshop were Domain-Specific Languages (DSLs) and Model-driven Software Development (MDSD) for robotics. A domain-specific language (DSL) is a programming language dedicated to a particular problem domain that offers specific notations and abstractions that increase programmer productivity within that domain. Models offer a high-level way for domain users to specify the functionality of their system at the right level of abstraction. DSLs and models have historically been used for programming complex systems. However, recently they have garnered interest as a separate field of study. Robotic systems...
Many principals or heads of English departments use supervision checklists to monitor or evaluate their teachers' performance. In practice, teachers may not feel satisfied with the feedback they receive from their superiors. This paper aims at inspiring them with ideas of self-learning to improve their own teaching performance for professional development. The writer shares his own experience as a principal and a head of an English department by exploring self-evaluation models for monitoring language teachers' performance in the classroom. For this purpose, it is necessary to identify the needs of language teachers; the resulting teacher portfolio may also help principals or heads of department evaluate their teachers' performance.
When adapting an existing speech recognition system to a new language, major development costs are associated with the creation of an appropriate acoustic model (AM). For its training, a certain amount of recorded and annotated speech is required. In this paper, we show that not only the annotation process but also the process of speech acquisition can be automated to minimize the need for human and expert work. We demonstrate the proposed methodology on Croatian, for which the target AM has been built via cross-lingual adaptation of a Czech AM in two ways: (a) using the commercially available GlobalPhone database, and (b) by automatic speech data mining from the HRT radio archive. The latter approach is cost-free, yet it yields comparable or better results in LVCSR experiments conducted on three Croatian test sets.
Stanley G M Ridge
The repeated claim that Afrikaans provides a useful model for planning the development of the African languages is examined critically in this article with a view to elucidating important issues in South African language planning. The first part acknowledges the fascination of a language which has developed so fast for all domains of use, before examining the factors which drove that development, identifying particularly its affinity to Dutch and its tempestuous political and social history as making it a deceptive model for African languages. The second part explores some aspects of its history which do suggest valuable perspectives for other languages at this stage of our history. The examples discussed are language medium in schooling, the dangers of loss of confidence in a language considered with the possibilities of effective status interventions, and the complex issues surrounding language standards. The third part briefly examines the history of Afrikaans's ambiguous relation to English. It notes the persistence of the English 'enemy' metaphor along with a practical demand for English, considers the racial politics of the statutory equality debate at the time of Union, and analyses three distortions occasioned by an ambiguous attitude to English in a contemporary discussion of the role of Afrikaans. Finally, the notion of a model or example is itself questioned. The article proposes developing a deeper understanding of actual needs and attitudes in an ongoing process of language planning as the most likely way of doing justice to all South Africa's languages.
Friedenthal, Sanford; Steiner, Rick
This book is the bestselling, authoritative guide to SysML for systems and software engineers, providing a comprehensive and practical resource for modeling systems with SysML. Fully updated to cover newly released version 1.3, it includes a full description of the modeling language along with a quick reference guide, and shows how an organization or project can transition to model-based systems engineering using SysML, with considerations for processes, methods, tools, and training. Numerous examples help readers understand how SysML can be used in practice, while reference material facilitates studying for the OMG Systems Modeling Professional (OCSMP) Certification Program, designed to test candidates' knowledge of SysML and their ability to use models to represent real-world systems.
Sien, Ven Yu
Object-oriented analysis and design (OOAD) is not an easy subject to learn. There are many challenges confronting students when studying OOAD. Students have particular difficulty abstracting real-world problems within the context of OOAD. They are unable to effectively build object-oriented (OO) models from the problem domain because they essentially do not know "what" to model. This article investigates the difficulties and misconceptions undergraduate students have with analysing systems using unified modelling language analysis class and sequence diagrams. These models were chosen because they represent important static and dynamic aspects of the software system under development. The results of this study will help students produce effective OO models and help software engineering lecturers design learning materials and approaches for introductory OOAD courses.
Colaiori, Francesca; Castellano, Claudio; Cuskley, Christine F; Loreto, Vittorio; Pugliese, Martina; Tria, Francesca
Empirical evidence shows that the rate of irregular usage of English verbs exhibits discontinuity as a function of their frequency: the most frequent verbs tend to be totally irregular. We aim to qualitatively understand the origin of this feature by studying simple agent-based models of language dynamics, where each agent adopts an inflectional state for a verb and may change it upon interaction with other agents. At the same time, agents are replaced at some rate by new agents adopting the regular form. In models with only two inflectional states (regular and irregular), we observe that either all verbs regularize irrespective of their frequency, or a continuous transition occurs between a low-frequency state, where the lemma becomes fully regular, and a high-frequency one, where both forms coexist. Introducing a third (mixed) state, wherein agents may use either form, we find that a third, qualitatively different behavior may emerge, namely, a discontinuous transition in frequency. We introduce and solve analytically a very general class of three-state models that allows us to fully understand these behaviors in a unified framework. Realistic sets of interaction rules, including the well-known naming game (NG) model, result in a discontinuous transition, in agreement with recent empirical findings. We also point out that the distinction between speaker and hearer in the interaction has no effect on the collective behavior. The results for the general three-state model, although discussed in terms of language dynamics, are widely applicable.
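As a much-simplified illustration of the two-state setting described above (not the authors' model; the update rule and rates here are my own assumptions), consider a voter-model-like simulation in which interactions copy forms neutrally while replacement injects the regular form:

```python
import random

def irregular_fraction(n_agents=200, freq=0.9, replace_rate=0.01,
                       steps=50000, seed=1):
    """Fraction of agents still using the irregular form after `steps` events.

    Each event is a replacement with probability `replace_rate` (a random
    agent is swapped for a newcomer using the regular form); otherwise, with
    probability `freq` (a stand-in for lemma frequency), a random hearer
    copies a random speaker's inflectional state.
    """
    rng = random.Random(seed)
    agents = [1] * n_agents          # 1 = irregular, 0 = regular
    for _ in range(steps):
        if rng.random() < replace_rate:
            agents[rng.randrange(n_agents)] = 0
        elif rng.random() < freq:
            speaker = rng.randrange(n_agents)
            hearer = rng.randrange(n_agents)
            agents[hearer] = agents[speaker]
    return sum(agents) / n_agents
```

Because copying is neutral and replacement only ever injects the regular form, this two-state toy drifts toward full regularization at any frequency, matching the first scenario the abstract describes; reproducing the discontinuous transition requires the third (mixed) state and the interaction rules analysed in the paper.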
Khandelwal, A.; Rajan, K. S.
Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom feature profiles in GML for specific needs, as seen in the widely used CityGML, simple features profile, coverage, etc. Simple features profile (SFP) is a simpler subset of GML with support for point, line and polygon geometries, constructed to cover the most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default, but SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.
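The GMZ model itself is specific to the paper, but the redundancy it exploits is easy to see with a generic baseline: even plain gzip collapses a repetitive SFP-style feature collection dramatically. The GML fragment below is hand-written for illustration, not taken from the paper.

```python
import gzip

# Hand-written, SFP-style fragment (illustrative only); a real WFS response
# repeats near-identical member markup for every feature it returns.
member = b"""<gml:featureMember>
  <Road gml:id="r1">
    <gml:LineString srsName="EPSG:4326">
      <gml:posList>77.59 12.97 77.60 12.98 77.61 12.99</gml:posList>
    </gml:LineString>
  </Road>
</gml:featureMember>
"""
document = member * 200          # mimic a collection of many similar features

compressed = gzip.compress(document)
ratio = len(document) / len(compressed)
```

A schema-aware lossless model such as GMZ can go further than a byte-level compressor because it knows in advance which tags and coordinate layouts SFP permits.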
McDonald, David; Proctor, Penny; Gill, Wendy; Heaven, Sue; Marr, Jane; Young, Jane
Intensive Speech and Language Therapy (SLT) training courses for Early Childhood Educators (ECEs) can have a positive effect on their use of interaction strategies that support children's communication skills. The impact of brief SLT training courses is not yet clearly understood. The aims of these two studies were to assess the impact of a brief…
The seventh issue of Complex Systems Informatics and Modeling Quarterly presents five papers devoted to two distinct research topics: systems modeling and natural language processing (NLP). Both of these subjects are very important in computer science. Through modeling we can simplify the studied problem by concentrating on only one aspect at a time. Moreover, a properly constructed model allows the modeler to work on higher levels of abstraction without having to concentrate on details. Since the size and complexity of information systems grows rapidly, creating good models of such systems is crucial. The analysis of natural language is slowly becoming a widely used tool in commerce and day-to-day life. Opinion mining allows recommender systems to provide accurate recommendations based on user-generated reviews. Speech recognition and NLP are the basis for such widely used personal assistants as Apple's Siri, Microsoft's Cortana, and Google Now. While a lot of work has already been done on natural language processing, the research usually concerns widely used languages, such as English. Consequently, natural language processing in languages other than English is a very relevant subject and is addressed in this issue.
Gajo, Laurent; Matthey, Marinette
The process and issues of labeling language program models are discussed insofar as they may affect the objectives, content, and method of instruction. The first section describes the origins, stated objectives, and implementation of a Neuchâtel (Switzerland) school program integrating heritage language and culture into the school curriculum, noting…
Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I
The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.
Redford, Melissa A
Speaking is an intentional activity. It is also a complex motor skill; one that exhibits protracted development and the fully automatic character of an overlearned behavior. Together these observations suggest an analogy with skilled behavior in the non-language domain. This analogy is used here to argue for a model of production that is grounded in the activity of speaking and structured during language acquisition. The focus is on the plan that controls the execution of fluent speech; specifically, on the units that are activated during the production of an intonational phrase. These units are schemas: temporally structured sequences of remembered actions and their sensory outcomes. Schemas are activated and inhibited via associated goals, which are linked to specific meanings. Schemas may fuse together over developmental time with repeated use to form larger units, thereby affecting the relative timing of sequential action in participating schemas. In this way, the hierarchical structure of the speech plan and ensuing rhythm patterns of speech are a product of development. Individual schemas may also become differentiated during development, but only if subsequences are associated with meaning. The necessary association of action and meaning gives rise to assumptions about the primacy of certain linguistic forms in the production process. Overall, schema representations connect usage-based theories of language to the action of speaking.
Whitlow, Johnathan E.; Engrand, Peter
The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.
Fleishman, Sarel J; Leaver-Fay, Andrew; Corn, Jacob E; Strauch, Eva-Maria; Khare, Sagar D; Koga, Nobuyasu; Ashworth, Justin; Murphy, Paul; Richter, Florian; Lemmon, Gordon; Meiler, Jens; Baker, David
Macromolecular modeling and design are increasingly useful in basic research, biotechnology, and teaching. However, the absence of a user-friendly modeling framework that provides access to a wide range of modeling capabilities is hampering the wider adoption of computational methods by non-experts. RosettaScripts is an XML-like language for specifying modeling tasks in the Rosetta framework. RosettaScripts provides access to protocol-level functionalities, such as rigid-body docking and sequence redesign, and allows fast testing and deployment of complex protocols without need for modifying or recompiling the underlying C++ code. We illustrate these capabilities with RosettaScripts protocols for the stabilization of proteins, the generation of computationally constrained libraries for experimental selection of higher-affinity binding proteins, loop remodeling, small-molecule ligand docking, design of ligand-binding proteins, and specificity redesign in DNA-binding proteins.
R, an open-source statistical language and data analysis tool, is gaining popularity among psychologists currently teaching statistics. R is especially suitable for teaching advanced topics, such as fitting the dichotomous Rasch model, a topic that involves transforming complicated mathematical formulas into statistical computations. This article describes R's use as a teaching tool and a data analysis software program in the analysis of the Rasch model in item response theory. It also explains the theory behind, as well as an educator's goals for, fitting the Rasch model with joint maximum likelihood estimation. This article also summarizes the R syntax for parameter estimation and the calculation of fit statistics. The results produced by R are compared with the results obtained from MINISTEP and the output of a conditional logit model. The use of R is encouraged because it is free, supported by a network of peer researchers, and covers both basic and advanced topics in statistics frequently used by psychologists.
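As a sketch of the computation involved (in plain Python rather than the article's R syntax, and using simple gradient ascent rather than the Newton-style steps production software uses), joint maximum likelihood for the dichotomous Rasch model alternates updates of person abilities and item difficulties:

```python
import math

def rasch_jml(data, iters=400, lr=0.1):
    """Crude joint maximum likelihood estimation for the dichotomous Rasch
    model. `data[p][i]` is person p's 0/1 response to item i. Alternates
    gradient-ascent updates of person abilities (theta) and item
    difficulties (b); persons or items with perfect scores would diverge
    and must be excluded beforehand.
    """
    n_persons, n_items = len(data), len(data[0])
    theta = [0.0] * n_persons
    b = [0.0] * n_items
    for _ in range(iters):
        for p in range(n_persons):
            grad = sum(data[p][i] - 1.0 / (1.0 + math.exp(b[i] - theta[p]))
                       for i in range(n_items))
            theta[p] += lr * grad
        for i in range(n_items):
            grad = sum(1.0 / (1.0 + math.exp(b[i] - theta[p])) - data[p][i]
                       for p in range(n_persons))
            b[i] += lr * grad
        shift = sum(b) / n_items           # fix the scale's origin
        b = [x - shift for x in b]         # shifting both b and theta by the
        theta = [t - shift for t in theta] # same constant leaves theta - b intact
    return theta, b
```

The centring step handles the usual identification problem: only differences theta_p - b_i enter the model, so the origin of the scale is arbitrary and is conventionally fixed by setting the mean item difficulty to zero.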
Beaujean, A Alexander; Parkin, Jason; Parker, Sonia
Previous research using the Cattell-Horn-Carroll (CHC) theory of cognitive abilities has shown a relationship between cognitive ability and academic achievement. Most of this research, however, has been done using the Woodcock-Johnson family of instruments with a higher order factor model. For CHC theory to grow, research should be done with other assessment instruments and tested with other factor models. This study examined the relationship between different factor models of CHC theory and the factors' relationships with language-based academic achievement (i.e., reading and writing). Using the co-norming sample for the Wechsler Intelligence Scale for Children--4th Edition and the Wechsler Individual Achievement Test--2nd Edition, we found that bifactor and higher order models of the subtests of the Wechsler Intelligence Scale for Children--4th Edition produced a different set of Stratum II factors, which, in turn, have very different relationships with the language achievement variables of the Wechsler Individual Achievement Test--2nd Edition. We conclude that the factor model used to represent CHC theory makes little difference when general intelligence is of major interest, but it makes a large difference when the Stratum II factors are of primary concern, especially when they are used to predict other variables.
… needed. In addition, it will be important to conduct empirical evaluation of the applicability and benefits of OWL-S in developing and managing service… Sycara, and T. Nishimura, "Towards a Semantic Web Ecommerce," in Proceedings of 6th Conference on Business Information Systems (BIS2003), Colorado… various Web services standardization efforts is the vision of the enormous benefit to be had by achieving reliable, ubiquitous software…
Charlton, Scott R.; Parkhurst, David L.
The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) interface has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server, for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations.
This study is the initial part of a doctoral dissertation research conducted with the aim of designing a learning model for teaching speaking according to the needs of faculty and students. The learning model is designed based on the curriculum of the Indonesian Language Education and Literature study programs at IKIP PGRI Bojonegoro, Unirow Tuban, and Unisda Lamongan, East Java, Indonesia. The development of this model is done to improve the students' speaking skills. The research and development steps consist of a needs analysis, document analysis, model design, model development and model experimentation. The needs analysis was conducted with first-semester students, three lecturers, and the heads of the study programs of IKIP PGRI Bojonegoro, Unirow Tuban and Unisda Lamongan to get information related to the needs of students and faculty for a model of learning speaking. The needs and document analyses were collected through questionnaires, interviews, and discussions with students and academics. The documents analysed in this study were a syllabus, lesson plans (RPP) and the model currently used. This research was carried out by following the procedures of research and development, covering the steps of (1) an exploratory study, (2) the development stage, (3) the model testing phase, and (4) dissemination (Borg and Gall, 1983; Sukmadinata, 2008). The results of the questionnaires and interviews revealed that lecturers need guidelines for the implementation of learning speaking, and that students need a learning model with strategies that foster self-confidence in speaking.
Given the emerging focus on the intercultural dimension in language teaching and learning, language educators have been exploring the use of information and communications technology (ICT)-mediated language learning environments to link learners in intercultural language learning communities around the globe. Despite the potential promise of…
Childs, Tucker; Good, Jeff; Mitchell, Alice
Most language documentation efforts focus on capturing lexico-grammatical information on individual languages. Comparatively little effort has been devoted to considering a language's sociolinguistic contexts. In parts of the world characterized by high degrees of multilingualism, questions surrounding the factors involved in language choice and…
May, John W; James, A Gordon; Steinbeck, Christoph
Genome-scale metabolic models often lack annotations that would allow them to be used for further analysis. Previous efforts have focused on associating metabolites in the model with a cross reference, but this can be problematic if the reference is not freely available, multiple resources are used or the metabolite is added from a literature review. Associating each metabolite with chemical structure provides unambiguous identification of the components and a more detailed view of the metabolism. We have developed an open-source desktop application that simplifies the process of adding database cross references and chemical structures to genome-scale metabolic models. Annotated models can be exported to the Systems Biology Markup Language open interchange format. Source code, binaries, documentation and tutorials are freely available at http://johnmay.github.com/metingear. The application is implemented in Java with bundles available for MS Windows and Macintosh OS X.
Wills, Tarrin Jon
Skaldic poetry is a highly complex textual phenomenon both in terms of the intricacy of the poetry and its contextual environment. Extensible Markup Language (XML) applications such as that of the Text Encoding Initiative provide a means of semantic representation of some of these complexities. XML, however, has limitations in representing semantic relationships that do not conform to the tree model. This article presents the relational data model as a way of representing the structure of skaldic texts and their contextual environment. The relational data model raises both problems and possibilities for this type of project. The main problem addressed here is the representation of the syntagmatic structures of texts in a data model that is not intrinsically ordered. The advantages are also explored, including networked data editing and management, quantitative linguistic analysis, dynamic representation…
Formulaic language studies remain less well recognized in language disorders. Profiles of differential formulaic language abilities in neurological disease have implications for cerebral models of language and for clinical evaluation and treatment of neurogenic language disorders.
Vorobjeva, Lyudmila V.; Vladimirova, Tatiana L.; Filippova, Elena M.
This article reviews methods of training foreign students in fluent dialogic communication skills for the business sphere. Training in dialogic communication for the business sphere is understood to be a key method of developing students' professional competences. The system for developing dialogic communication skills in the business sphere is included in the general system for developing oral communication skills in teaching Russian as a Second Language. Described…
Razdobudko-Čović Larisa I.
The paper presents an analysis of two Serbian translations of V. Nabokov's memoirs: the translation of the novel 'Drugie berega' ('The Other Shores'), published in Russian as an authorized translation from the original English version 'Conclusive Evidence', and the translation of Nabokov's authorized translation from Russian to English entitled 'Speak, Memory'. The creolization of three models of culture in translation from the two originals, Russian and English, is presented. Specific features of the two Serbian translations are analyzed, and a survey of characteristic mistakes caused by specific characteristics of the source language is given. Also, Nabokov's very original, quite interpretative approach to translation is highlighted.
Ji, Xinye; Shen, Chaopeng
Geoscientific models manage myriad and increasingly complex data structures as trans-disciplinary models are integrated. They often incur significant redundancy with cross-cutting tasks. Reflection, the ability of a program to inspect and modify its structure and behavior at runtime, is known as a powerful tool to improve code reusability, abstraction, and separation of concerns. Reflection is rarely adopted in high-performance geoscientific models, especially in Fortran, where it was previously deemed implausible. Practical constraints of language and legacy often limit us to feather-weight, native-language solutions. We demonstrate the usefulness of gd, a dynamically linked metaObjects module that emulates structural reflection. We show real-world examples including data structure self-assembly, effortless input/output (I/O) and upgrade to parallel I/O, recursive actions and batch operations. We share gd and a derived module that reproduces MATLAB-like structures in Fortran and C++. We suggest that both a gd representation and a Fortran-native representation be maintained to access the data, each for separate purposes. Embracing emulated reflection allows generically written codes that are highly reusable across projects.
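The paper's gd module targets Fortran and C++; as a loose, language-level analogue of the MATLAB-like dynamic structure it reproduces, Python can emulate auto-assembling nested fields in a few lines. This is an illustration of the idea only, not the authors' implementation.

```python
class Struct:
    """Fields spring into existence on first access and nest automatically,
    mimicking MATLAB-style dynamic structures."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails:
        # auto-vivify a nested Struct under that name.
        child = Struct()
        setattr(self, name, child)
        return child

cfg = Struct()
cfg.model.dt = 3600          # nested fields assemble themselves
cfg.io.output = "daily.nc"
```

The convenience comes at a price: a misspelled read silently creates an empty field instead of raising an error, which is one reason to keep a strictly typed native representation alongside such a dynamic view, as the paper suggests for gd.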
Over time, technical systems such as automobiles or spacecraft have grown more complex due to the incorporation of increasingly many and different components. The integration of these components, which are frequently designed and constructed within separate departments and companies, may lead to malfunctioning systems, as their interplay cannot be tested within the earlier phases of development. This paper introduces compatibility management as one solution to the problems of late component integration. Compatibility management is carried out on a common cross-domain model of the system and therefore allows compatibility to be tested early on. We show how compatibility management can be embedded into the phased development of ECSS-M-30A and present the Unified Compatibility Modeling Language (UCML), which is used for the underlying cross-domain model. A case study demonstrates the application of UCML in the development of a small satellite and explains different degrees of compatibility.
Kerly, Alice; Hall, Phil; Bull, Susan
There is an extensive body of work on Intelligent Tutoring Systems: computer environments for education, teaching and training that adapt to the needs of the individual learner. Work on personalisation and adaptivity has included research into allowing the student user to enhance the system's adaptivity by improving the accuracy of the underlying learner model. Open Learner Modelling, where the system's model of the user's knowledge is revealed to the user, has been proposed to support student reflection on their learning. Increased accuracy of the learner model can be obtained by the student and system jointly negotiating the learner model. We present the initial investigations into a system to allow people to negotiate the model of their understanding of a topic in natural language. This paper discusses the development and capabilities of both conversational agents (or chatbots) and Intelligent Tutoring Systems, in particular Open Learner Modelling. We describe a Wizard-of-Oz experiment to investigate the feasibility of using a chatbot to support negotiation, and conclude that a fusion of the two fields can lead to developing negotiation techniques for chatbots and the enhancement of the Open Learner Model. This technology, if successful, could have widespread application in schools, universities and other training scenarios.
Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. The Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely "Typical" (left-hemisphere dominance, with clearly positive HFLI values; 88% of RH, 78% of LH), "Ambilateral" (no dominant hemisphere, with HFLI values close to 0; 12% of RH, 15% of LH) and "Strongly-atypical" (right-hemisphere dominance, with clearly negative HFLI values; 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% of the general population) who exhibit strong right-hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.
… consider the pros and cons of the two primary ways applications can be designed and programmed: (1) native applications and (2) applications using HTML5… content and functionality of the application. HTML5 is considered the fifth iteration, or version, of Hypertext Markup Language (HTML) and is used… mobile devices, etc.). HTML5 builds on the previous coding standards with the addition of certain improvements. One of the improvements most…
Haya, Mariela; Franch, Xavier; Mayol, Enric
Business management is a complex task that can be facilitated using different methodologies and models. One of their most relevant purposes is to align the organization strategy with the daily functioning of the organization. One of these models is the Balanced Scorecard (BSC). In this paper, we propose a modeling strategy for the BSC implantation process. We will model it using UML Activity Diagrams and Strategy Dependency models of the language i*. The Activity Diagrams allow determining th...
LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.
One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case-base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism, improving communication between developers and users. PMID:9929346
Full Text Available This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries, plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).
Kokkinn, Beverley; Stupans, Ieva
The pilot project, described in this paper, targeted English as an additional language (EAL) students to facilitate their development of patient counselling communication skills. An interdisciplinary content-based model was developed drawing on an interactional sociolinguistic framework to map language use valued in pharmacy counselling. Evaluation included analysis of successive self-assessments and surveys of students, surveys of teaching staff and final test results. Evaluation indicated that the interdisciplinary model was highly successful in improving EAL students' competency in pharmacy counselling. The model may have wider application for education in health professional programmes. © 2011 The Authors. IJPP © 2011 Royal Pharmaceutical Society.
Kurtev, Ivan; van den Berg, Klaas; Aßmann, Uwe; Aksit, Mehmet; Rensink, Arend
In the Meta Object Facility (MOF) meta-modeling architecture a number of model transformation scenarios can be identified. It could be expected that a meta-modeling architecture would be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite
Full Text Available In the article, the acute problem of implementing pedagogical innovations and online technologies in the educational process is analyzed. The article explores the advantages of blended learning as a contemporary educational approach in comparison with traditional campus learning. Blended learning is regarded worldwide as the combination of classroom face-to-face sessions with interactive learning opportunities created online. The purpose of the article is to identify the transformational potential of blended learning for students and teachers in ensuring a more personalized learning experience. The concept of blended learning, as a means to enhance foreign language teaching and learning by combining traditional face-to-face interaction between teacher and student with computer-mediated activities, is examined. In the article, the main classification of blended learning models is established. There are four main blended learning models which include both face-to-face instruction time and online learning: the Rotation Model, Flex Model, A La Carte Model, and Enriched Virtual Model. Once implemented successfully, a blended model can take advantage of both brick-and-mortar and digital worlds, providing significant benefits for educational establishments and learners. To integrate any of the blended learning models, a teacher can create online activities that enable learners to explore the topic online at home, and then develop face-to-face interactions to dig deeper into the subject matter during the lesson. The use of blended learning models to expand educational opportunities for students during foreign language acquisition is explored; such models increase the availability and flexibility of education, take individual learning needs into account, and give students some element of control over time, place and pace. The realization of blended learning models with regard to age and physiological peculiarities of
Michael D. Kickmeier-Rust
Full Text Available The uptake of information and communication technologies in classrooms has been a key trend over the past years and decades. Teachers are using Moodle courses, e-Portfolios, Google Docs, and perhaps learning games or virtual worlds such as OpenSim for educational purposes. A second trend pushes towards formatively inspired assessment and feedback, often combined with attempts at educational data mining and learning analytics. In this paper we present a role model for teaching English as a second language using OpenSim, and a tool that enables teachers to perform real-time learning analytics and deliver direct formative feedback and interventions in the virtual learning session. We also present an approach to aggregate and store the learning information in open learner models.
Millions of Americans in all age groups are affected by deafness and impaired hearing. They communicate with others using American Sign Language (ASL). Teaching is tutorial (person-to-person) or with limited video content. We believe that high-resolution 3D models and their animations can be used to teach ASL effectively, with the following advantages over the traditional teaching approach: a) signing can be played at varying speeds and as many times as necessary, b) being 3-D constructs, models can be viewed from diverse angles, c) signing can be applied to different characters (male, female, child, elderly, etc.), d) special editing like close-ups, picture-in-picture, and phantom movements can make learning easier, and e) clothing, surrounding environment, and lighting conditions can be varied to expose the student to less-than-ideal situations.
Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time-series neural network is modelled to solve the problems that a basic neural network faces when attempting POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown clustering, resulting in a binary dictionary that the neural network can use. To improve the accuracy of the neural network, other features such as the POS tags, suffixes, and affixes of previous words are also fed to the neural network.
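The input encoding described in this abstract can be sketched as follows. The bit-string paths, suffix list, window size, and helper names below are illustrative assumptions, not the paper's actual setup; in practice the binary dictionary would come from a Brown clustering run over a large corpus, and the resulting vectors would feed a time-series neural tagger.

```python
import numpy as np

# Hypothetical Brown-cluster bit-string paths (illustrative; a real run over a
# corpus would produce these), padded/truncated to a fixed length below.
BROWN_PATHS = {
    "the": "0010", "dog": "110101", "dogs": "110100",
    "runs": "01101", "quickly": "01111",
}

def word_features(word, path_len=8, suffixes=("s", "ly", "ing")):
    """Binary feature vector: padded Brown-path bits plus suffix indicators."""
    path = BROWN_PATHS.get(word, "")
    bits = [int(b) for b in path[:path_len]]
    bits += [0] * (path_len - len(bits))            # pad to fixed length
    suffix_bits = [int(word.endswith(s)) for s in suffixes]
    return np.array(bits + suffix_bits, dtype=float)

def window_features(words, i, window=1):
    """Time-series input: concatenate features of words[i-window .. i]."""
    feats = []
    for j in range(i - window, i + 1):
        if 0 <= j < len(words):
            feats.append(word_features(words[j]))
        else:
            feats.append(np.zeros(8 + 3))           # padding at sentence edge
    return np.concatenate(feats)

x = window_features(["the", "dog", "runs"], 2)
print(x.shape)  # → (22,)
```

The concatenated window vector is what a time-series network would consume one position at a time; previous-word POS tags could be appended to it in the same one-hot fashion.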
Baldow, Christoph; Salentin, Sebastian; Schroeder, Michael; Roeder, Ingo; Glauche, Ingmar
Over the past decades, quantitative methods linking theory and observation have become increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models have been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straightforward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both publishing and executing computational models without restrictions on the programming language, thereby combining maximum flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in the life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.
Lan, Yu-Ju; Chang, Kuo-En; Chen, Nian-Shing
In response to the need to cultivate pre-service Chinese as a foreign language (CFL) teachers' information and communication technology (ICT) competency in online synchronous environments, this research adopted a three-stage cyclical model named "cooperation-based cognition, action, and reflection" (CoCAR). The model was implemented in an 18-week…
Munoz Fernandez, Michela Miche
The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.
Full Text Available Javanese cultural words are the linguistic units which are very specific to Javanese culture and society. This article aims at describing what Javanese cultural words are found in the local newspapers, what they represent, and why they are used in the local newspapers in Central Java. Non-participant observation is used to collect the data for analysis, followed by page-filing and note-taking techniques. Referential, reflective-introspective, and abductive inferential methods are used to analyze the data. The result indicates that the Javanese cultural words found in the local newspapers represent festivals, rituals, Javanese ways of life, social activities, actions, feelings, thoughts, behavior, and experiences. The words become indicators that the journalists of the local newspapers in Central Java have positive attitudes toward Javanese words. This becomes a model for language maintenance of Javanese. This implies that the words are stored in long-term memory as mental images, which are used when needed for communication. The existence of these concepts residing in the mind makes Javanese language maintenance possible, supported by the attitudes of the Javanese.
Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor
This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…
Zhao, Xiaowei; Li, Ping
Cross-language priming is a widely used experimental paradigm in psycholinguistics to study how bilinguals' two languages are represented and organized. Researchers have observed a number of interesting patterns from the priming effects of both translation equivalents and semantically related word pairs across languages. In this study, we…
Snyder, Delys Waite; Nielson, Rex P.; Kurzer, Kendon
Within the growing field of scholarly literature on foreign language (FL) writing pedagogy, few studies have addressed pedagogical questions regarding the teaching of writing to advanced language learners. Writing fellows peer tutoring programs, although typically associated with first language writing instruction, likely can benefit and support…
Linck, Jared A.; Cunnings, Ian
Second language acquisition researchers often face particular challenges when attempting to generalize study findings to the wider learner population. For example, language learners constitute a heterogeneous group, and it is not always clear how a study's findings may generalize to other individuals who may differ in terms of language background…
Galishninkova Elena M.
Full Text Available The paper considers the problem of adaptation of first-year students to professional activity by means of a foreign language. To design the adaptation model, we developed four-block questionnaires to determine students’ readiness for adaptation. The experiment resulted in three groups of students with high, average and low levels of adaptation. Students with a low level of adaptation became the target of our research. To remove the difficulties in studying a foreign language experienced by the third group of students, an adaptation model was elaborated. Further, we identified the conditions for the effective implementation of the adaptation model of students to vocational training. In our view, these pedagogical conditions promote a more “sparing” transition of students to their main function as first-year students and increase the level of foreign language learning as well as improve the educational indicators.
Stella, Massimo; Brede, Markus
In this paper we provide a quantitative framework for the study of phonological networks (PNs) for the English language by carrying out principled comparisons to null models, either based on site percolation, randomization techniques, or network growth models. In contrast to previous work, we mainly focus on null models that reproduce lower order characteristics of the empirical data. We find that artificial networks matching connectivity properties of the English PN are exceedingly rare: this leads to the hypothesis that the word repertoire might have been assembled over time by preferentially introducing new words which are small modifications of old words. Our null models are able to explain the ‘power-law-like’ part of the degree distributions and generally retrieve qualitative features of the PN such as high clustering, high assortativity coefficient and small-world characteristics. However, the detailed comparison to expectations from null models also points out significant differences, suggesting the presence of additional constraints in word assembly. Key constraints we identify are the avoidance of large degrees, the avoidance of triadic closure and the avoidance of large non-percolating clusters.
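The kind of null-model comparison this abstract describes can be sketched as below: compute a clustering statistic on an empirical network and on a degree-preserving randomization of it. The toy edge list and function names are illustrative assumptions; the paper's null models also include site percolation and network growth models.

```python
import random

def to_adj(edges, nodes):
    """Adjacency sets for an undirected graph given as an edge list."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def degree_preserving_swap(edges, n_swaps, seed=0):
    """Null model: randomize wiring by double-edge swaps, keeping all degrees."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue                                   # would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue                                   # would create a multi-edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

# Toy "phonological" network: words linked when one edit apart (illustrative).
nodes = {"cat", "bat", "hat", "cot", "cut"}
edges = [("cat", "bat"), ("cat", "hat"), ("bat", "hat"),
         ("cat", "cot"), ("cot", "cut")]
empirical_c = clustering(to_adj(edges, nodes))
null_c = clustering(to_adj(degree_preserving_swap(edges, 200), nodes))
print(empirical_c, null_c)
```

Comparing the empirical statistic against the distribution of `null_c` over many randomizations is what lets one say a feature (here, clustering) exceeds null-model expectations.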
Full Text Available This paper proposes a note-based music language model (MLM) for improving note-level polyphonic piano transcription. The MLM is based on a recurrent structure, which can model the temporal correlations between notes in music sequences. To combine the outputs of the note-based MLM and the acoustic model directly, an integrated architecture is adopted in this paper. We also propose an inference algorithm, in which the note-based MLM is used to predict notes at the blank onsets in the thresholded transcription results. The experimental results show that the proposed inference algorithm improves the performance of note-level transcription. We also observe that the combination of the restricted Boltzmann machine (RBM) and recurrent structure outperforms a single recurrent neural network (RNN) or long short-term memory network (LSTM) in modeling the high-dimensional note sequences. Among all the MLMs, LSTM-RBM helps the system yield the best results on all evaluation metrics regardless of the performance of the acoustic models.
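The inference idea, filling blank onsets with help from the language model, can be sketched as below. The weighted-combination rule, the `alpha` weight, and the toy posteriograms are illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

def fill_blank_onsets(acoustic_post, mlm_pred, threshold=0.5, alpha=0.5):
    """Threshold the acoustic posteriogram; at frames where no pitch survives
    ('blank onsets'), re-decide using a weighted combination with the
    music-language-model prediction. Arrays are (frames, pitches)."""
    acoustic_post = np.asarray(acoustic_post, float)
    mlm_pred = np.asarray(mlm_pred, float)
    notes = acoustic_post >= threshold
    blank = ~notes.any(axis=1)                      # frames with no active pitch
    combined = alpha * acoustic_post + (1 - alpha) * mlm_pred
    notes[blank] = combined[blank] >= threshold
    return notes

# Frame 0: clear acoustic onset; frame 1: weak acoustics, strong MLM prediction.
notes = fill_blank_onsets([[0.9, 0.1], [0.2, 0.2]],
                          [[0.0, 0.0], [0.1, 0.9]])
print(notes.tolist())  # → [[True, False], [False, True]]
```

In frame 1 the acoustic model alone finds nothing above threshold, but the MLM's context-based prediction recovers the note, which is the point of the inference step.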
Ingram, Sandra W.
This quantitative comparative descriptive study involved analyzing archival data from end-of-course (EOC) test scores in biology of English language learners (ELLs) taught or not taught using the sheltered instruction observation protocol (SIOP) model. The study includes descriptions and explanations of the benefits of the SIOP model to ELLs, especially in content area subjects such as biology. Researchers have shown that ELLs in high school lag behind their peers in academic achievement in content area subjects. Much of the research on the SIOP model took place in elementary and middle school, and more research was necessary at the high school level. This study involved analyzing student records from archival data to describe and explain if the SIOP model had an effect on the EOC test scores of ELLs taught or not taught using it. The sample consisted of 527 Hispanic students (283 females and 244 males) from Grades 9-12. An independent sample t-test determined if a significant difference existed in the mean EOC test scores of ELLs taught using the SIOP model as opposed to ELLs not taught using the SIOP model. The results indicated that a significant difference existed between EOC test scores of ELLs taught using the SIOP model and ELLs not taught using the SIOP model (p = .02). A regression analysis indicated a significant difference existed in the academic performance of ELLs taught using the SIOP model in high school science, controlling for free and reduced-price lunch (p = .001) in predicting passing scores on the EOC test in biology at the school level. The data analyzed for free and reduced-price lunch together with SIOP data indicated that both together were not significant (p = .175) for predicting passing scores on the EOC test in high school biology. Future researchers should repeat the study with student-level data as opposed to school-level data, and data should span at least three years.
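The independent-samples t-test used in this study can be sketched as follows. The scores below are hypothetical illustrations, not the study's school-level data, and the pooled-variance formula is the standard textbook form.

```python
import numpy as np

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled sample variance (ddof=1 gives the unbiased sample variance).
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

# Hypothetical EOC biology scores (illustrative only).
siop = [78, 82, 85, 88, 91]        # taught with the SIOP model
control = [70, 74, 79, 80, 83]     # not taught with the SIOP model
t, df = independent_t(siop, control)
print(round(t, 2), df)             # t ≈ 2.35 with df = 8
```

The t statistic and df would then be compared against the t distribution to obtain the p-value the study reports (p = .02 in the actual data).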
Dale, Larry; Millstein, Dev; Coughlin, Katie; Van Buskirk, Robert; Rosenquist, Gregory; Lekov, Alex; Bhuyan, Sanjib
In this report we calculate the change in final consumer prices due to minimum efficiency standards, focusing on a standard economic model of the air-conditioning and heating equipment (ACHE) wholesale industry. The model examines the relationship between the marginal cost to distribute and sell equipment and the final consumer price in this industry. The model predicts that the impact of a standard on the final consumer price is conditioned by its impact on marginal distribution costs. For example, if a standard raises the marginal cost to distribute and sell equipment a small amount, the model predicts that the standard will raise the final consumer price a small amount as well. Statistical analysis suggests that standards do not increase the amount of labor needed to distribute equipment: the same employees needed to sell lower-efficiency equipment can sell high-efficiency equipment. Labor is a large component of the total marginal cost to distribute and sell air-conditioning and heating equipment. We infer from this that standards have a relatively small impact on ACHE marginal distribution and sale costs. Thus, our model predicts that a standard will have a relatively small impact on final ACHE consumer prices. Our statistical analysis of U.S. Census Bureau wholesale revenue tends to confirm this model prediction. Generalizing, we find that the ratio of manufacturer price to final consumer price prior to a standard tends to exceed the ratio of the change in manufacturer price to the change in final consumer price resulting from a standard. The appendix expands our analysis through a typical distribution chain for commercial and residential air-conditioning and heating equipment.
Pechak, Celia; Diaz, Deborah; Dillon, Loretta
As the Hispanic population continues to expand in the United States, health professionals increasingly may encounter people who speak Spanish and have limited English proficiency. Responding to these changes, various health profession educators have incorporated Spanish language training into their curricula. Of 12 doctor of physical therapy (DPT) programs identified as including elective or required Spanish courses, the program at The University of Texas at El Paso is the only one integrating required Spanish language training across the curriculum. The purpose of this case report is to describe the development, implementation, and preliminary outcomes of the evolving educational model at The University of Texas at El Paso. The University of Texas at El Paso is situated immediately across the border from Mexico. Responding to the large population with limited English proficiency in the community, faculty began to integrate required Spanish language training during a transition from a master-level to a DPT curriculum. The Spanish language curriculum pillar includes a Spanish medical terminology course, language learning opportunities threaded throughout the clinical courses, clinical education courses, and service-learning. Forty-five DPT students have completed the curriculum. Assessment methods were limited for early cohorts. Clinically relevant Spanish verbal proficiency was assessed with a practical examination in the Spanish course, a clinical instructor-rated instrument, and student feedback. Preliminary data suggested that the model is improving Spanish language proficiency. The model still is evolving. Spanish language learning opportunities in the curriculum are being expanded. Also, problems with the clinical outcome measure have been recognized. Better definition of intended outcomes and validation of a revised tool are needed. This report should promote opportunities for collaboration with others who are interested in linguistic competence. © 2014
Temple, Michael W; Lehmann, Christoph U; Fabbri, Daniel
Discharging patients from the Neonatal Intensive Care Unit (NICU) can be delayed for non-medical reasons including the procurement of home medical equipment, parental education, and the need for children's services. We previously created a model to identify patients that will be medically ready for discharge in the subsequent 2-10 days. In this study we use Natural Language Processing to improve upon that model and discern why the model performed poorly on certain patients. We retrospectively examined the text of the Assessment and Plan section from daily progress notes of 4,693 patients (103,206 patient-days) from the NICU of a large, academic children's hospital. A matrix was constructed using words from NICU notes (single words and bigrams) to train a supervised machine learning algorithm to determine the most important words differentiating poorly performing patients compared to well performing patients in our original discharge prediction model. NLP using a bag of words (BOW) analysis revealed several cohorts that performed poorly in our original model. These included patients with surgical diagnoses, pulmonary hypertension, retinopathy of prematurity, and psychosocial issues. The BOW approach aided in cohort discovery and will allow further refinement of our original discharge model prediction. Adequately identifying patients discharged home on g-tube feeds alone could improve the AUC of our original model by 0.02. Additionally, this approach identified social issues as a major cause for delayed discharge. A BOW analysis provides a method to improve and refine our NICU discharge prediction model and could potentially avoid over 900 (0.9%) hospital days.
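The bag-of-words idea above can be sketched as ranking words by how strongly they distinguish the poorly-predicted cohort from the well-predicted one. The toy note snippets and the smoothed log-odds scoring are illustrative assumptions; the study trained a supervised learner on a word/bigram matrix rather than using this direct statistic.

```python
from collections import Counter
import math

def discriminative_words(poor_notes, well_notes):
    """Rank words by smoothed log-odds of occurring in notes of patients the
    discharge model predicted poorly, versus those it predicted well."""
    poor = Counter(w for note in poor_notes for w in note.lower().split())
    well = Counter(w for note in well_notes for w in note.lower().split())
    vocab = set(poor) | set(well)
    n_poor, n_well = sum(poor.values()), sum(well.values())
    # Add-one smoothing keeps words absent from one cohort from blowing up.
    score = {w: math.log((poor[w] + 1) / (n_poor + len(vocab)))
              - math.log((well[w] + 1) / (n_well + len(vocab)))
             for w in vocab}
    return sorted(vocab, key=score.get, reverse=True)

# Toy Assessment-and-Plan snippets (illustrative, not real patient text).
poor_notes = ["g-tube feeds continue", "g-tube dependent awaiting home equipment"]
well_notes = ["full po feeds", "stable feeds advancing"]
ranked = discriminative_words(poor_notes, well_notes)
print(ranked[0])  # → g-tube
```

Surfacing terms like this is the cohort-discovery step: once a term such as "g-tube" is identified, it can be added as a feature to refine the original discharge model.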
Cao, Xin; Cong, Gao; Cui, Bin
Community Question Answering (CQA) has emerged as a popular type of service meeting a wide range of information needs. Such services enable users to ask and answer questions and to access existing question-answer pairs, and CQA archives containing very large volumes of valuable user-generated content have become important information resources on the Web. To make the body of knowledge accumulated in CQA archives accessible, effective and efficient question search is required. Question search in a CQA archive aims to retrieve historical questions that are relevant to new questions posed by users. This paper proposes a category-based framework for search in CQA archives. The framework embodies several new techniques that use language models to exploit categories of questions for improving question-answer search. Experiments conducted on real data from Yahoo! Answers demonstrate that the proposed...
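One way categories can enter a language-model retrieval framework is by smoothing each question's unigram model first with its category's model and then with the collection model. The sketch below is an illustrative assumption about how such scoring might look; the interpolation weights, smoothing scheme, and toy data are not the paper's actual formulation.

```python
from collections import Counter
import math

def lm_score(query, doc_tokens, cat_counts, coll_counts, lam=0.5, mu=0.3):
    """Query-likelihood score for a historical question: interpolate the
    question model with its category model, backing off to an add-one
    smoothed collection model so every probability stays positive."""
    dc, dlen = Counter(doc_tokens), len(doc_tokens)
    cat_len, coll_len = sum(cat_counts.values()), sum(coll_counts.values())
    score = 0.0
    for w in query.lower().split():
        p_doc = dc[w] / dlen if dlen else 0.0
        p_cat = cat_counts[w] / cat_len if cat_len else 0.0
        p_coll = (coll_counts[w] + 1) / (coll_len + len(coll_counts) + 1)
        score += math.log(lam * p_doc + (1 - lam) * (mu * p_cat + (1 - mu) * p_coll))
    return score

# Toy archive: a "programming" category model and a collection model.
coll = Counter("how to fix python error java loop cook pasta recipe".split())
cat_programming = Counter("python error loop fix java".split())
rel = lm_score("python error", "fix python error in loop".split(),
               cat_programming, coll)
irrel = lm_score("python error", "cook pasta recipe".split(),
                 cat_programming, coll)
print(rel > irrel)  # → True
```

Because the category model rewards words typical of the query's topic even when a particular question omits them, ranking by this score favors historical questions from the right category.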