WorldWideScience

Sample records for learning resource metadata

  1. Metadata and Ontologies in Learning Resources Design

    Science.gov (United States)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context, and learner characteristics, among other things. Metadata are an important source of this knowledge. However, metadata by themselves do not provide all the information needed for resource design. Here we argue the need to use different data and knowledge models to better understand the complex processes involved in e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The resulting knowledge is represented in an ontology using OWL and SWRL. We present the implementation of an ontology-based Learning Object Classifier and show that ontologies can support design activities in e-learning.
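
As a toy illustration of the kind of classification rule such an ontology can encode, the sketch below maps IEEE-LOM-style educational fields to a design category in plain Python; the field names, rule bodies, and category labels are assumptions for this example, not the authors' actual OWL/SWRL rules:

```python
# Illustrative sketch: rule-based classification over IEEE-LOM-style
# metadata, in the spirit of the SWRL rules the abstract describes.
# Field names and category labels are assumptions, not the paper's rules.

def classify_learning_object(lom: dict) -> str:
    """Map hypothetical LOM 'educational' fields to a design category."""
    interactivity = lom.get("educational.interactivityType", "")
    resource_type = lom.get("educational.learningResourceType", "")
    if interactivity == "active" and resource_type in {"exercise", "simulation"}:
        return "practice-oriented"
    if resource_type in {"narrative text", "slide"}:
        return "expository"
    return "unclassified"

sample = {
    "educational.interactivityType": "active",
    "educational.learningResourceType": "simulation",
}
print(classify_learning_object(sample))  # practice-oriented
```

In an actual ontology-backed classifier these conditions would live as SWRL rules over OWL individuals rather than Python conditionals.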

  2. Learning Object Metadata in a Web-Based Learning Environment

    NARCIS (Netherlands)

    Avgeriou, Paris; Koutoumanos, Anastasios; Retalis, Symeon; Papaspyrou, Nikolaos

    2000-01-01

    The plethora and variance of learning resources embedded in modern web-based learning environments require a mechanism to enable their structured administration. This goal can be achieved by defining metadata on them and constructing a system that manages the metadata in the context of the learning…

  3. Tags and self-organisation: a metadata ecology for learning resources in a multilingual context

    OpenAIRE

    Vuorikari, Riina Hannuli

    2010-01-01

    Vuorikari, R. (2009). Tags and self-organisation: a metadata ecology for learning resources in a multilingual context. Doctoral thesis. November 13, 2009, Heerlen, The Netherlands: Open University of the Netherlands, CELSTEC.

  4. Tags and self-organisation: a metadata ecology for learning resources in a multilingual context

    NARCIS (Netherlands)

    Vuorikari, Riina

    2009-01-01

    Vuorikari, R. (2009). Tags and self-organisation: a metadata ecology for learning resources in a multilingual context. Doctoral thesis. November 13, 2009, Heerlen, The Netherlands: Open University of the Netherlands, CELSTEC.

  5. Social Web Content Enhancement in a Distance Learning Environment: Intelligent Metadata Generation for Resources

    Science.gov (United States)

    García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny

    2017-01-01

    Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…
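
The paper's own classifier is based on associative-memory models; as a much simpler stand-in that only illustrates the idea of generating metadata from resource content, the sketch below suggests keywords by term frequency (the stopword list and thresholds are arbitrary choices for this example):

```python
# A deliberately simple stand-in for automatic metadata generation:
# pick candidate keywords by term frequency. The paper itself uses an
# associative-memory classifier; this sketch only illustrates deriving
# descriptive metadata from a resource's text.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "for", "is", "are"}

def suggest_keywords(text: str, k: int = 3) -> list:
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(k)]

doc = ("Distance learning resources benefit from metadata. "
       "Metadata enables retrieval of learning resources.")
print(suggest_keywords(doc))
```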

  6. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    Science.gov (United States)

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In recent years, the development of Learning Object repositories has increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. High-quality metadata is key to successful retrieval. Learning Objects are usually described with metadata in the standard…

  7. Assuring the Quality of Agricultural Learning Repositories: Issues for the Learning Object Metadata Creation Process of the CGIAR

    Science.gov (United States)

    Zschocke, Thomas; Beniest, Jan

    The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources, along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. As a critical component of any digital repository, quality metadata not only enable users to find the resources they require more easily, but are also essential for the operation and interoperability of the repository itself. Studies show that repositories have difficulty obtaining good-quality metadata from their contributors, especially when this process involves many different stakeholders, as is the case with an international organization like the CGIAR. To address this issue, the CGIAR began investigating the Open ECBCheck as well as the ISO/IEC 19796-1 standard to establish quality protocols for its training. The paper highlights the implications and challenges of strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.

  8. Handling multiple metadata streams regarding digital learning material

    NARCIS (Netherlands)

    Roes, J.B.M.; Vuuren, J. van; Verbeij, N.; Nijstad, H.

    2010-01-01

    This paper presents the outcome of a study performed in the Netherlands on handling multiple metadata streams regarding digital learning material. The paper describes the present metadata architecture in the Netherlands, the present suppliers and users of metadata and digital learning materials. It…

  9. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. We present lessons learned from applying the CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality.

  10. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    Science.gov (United States)

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes and procedures, and a series of actions defined in three major metadata implementation phases: (1) Getting Started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses…

  11. Exploring the Relevance of Europeana Digital Resources: Preliminary Ideas on Europeana Metadata Quality

    Directory of Open Access Journals (Sweden)

    Paulo Alonso Gaona-García

    2017-01-01

    Europeana is a European project that aims to become a modern “Alexandria Digital Library”, providing access to thousands of European cultural heritage resources contributed by more than fifteen hundred institutions such as museums, libraries, archives and cultural centers. This article explores Europeana digital resources as open learning repositories, with the aim of re-using digital resources to improve the learning process in the domain of arts and cultural heritage. To this end, we present metadata quality results from a case study, along with recommendations and suggestions for initiatives of this type in our educational context, in order to improve access to digital resources in specific knowledge areas.

  12. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
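
A flat discovery record in this spirit — Dublin Core elements plus a GeoRSS-style bounding box — can be sketched as follows; the field names are illustrative rather than the normative GeoBlacklight keys:

```python
# Sketch of a flat discovery record: Dublin Core elements plus a
# GeoRSS-style bounding box. The field names and the required set are
# illustrative assumptions, not the normative GeoBlacklight schema.

REQUIRED = {"dc_title", "dc_identifier", "dc_rights", "georss_box"}

def validate(record: dict) -> list:
    """Return the required discovery fields missing from a record."""
    return sorted(REQUIRED - record.keys())

record = {
    "dc_title": "Hydrography, San Francisco Bay",
    "dc_identifier": "stanford-abc123",   # hypothetical identifier
    "dc_rights": "Public",
    "georss_box": "37.4 -122.6 38.0 -121.7",  # minY minX maxY maxX
    "dc_format": "Shapefile",
}
print(validate(record))  # [] -> record is complete
```

Keeping the record flat and small is what makes it easy to index in Solr and share across institutions.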

  13. Information resource description creating and managing metadata

    CERN Document Server

    Hider, Philip

    2012-01-01

    An overview of the field of information organization that examines resource description as both a product and a process of the contemporary digital environment. This timely book employs the unifying mechanism of the semantic web and the Resource Description Framework to integrate the various traditions and practices of information and knowledge organization. Uniquely, it covers both the domain-specific traditions and practices and the practices of the 'metadata movement' through a single lens: that of resource description in the broadest, semantic-web sense. This approach more readily accommodates…

  14. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    The use of digital learning resources creates an increasing need for semantic metadata describing whole resources as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup to parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. We propose a new methodology in which metadata is "pointed in" as annotations, using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that this methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".
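
The "pointed in" annotation idea can be sketched as a triple whose subject addresses part of a resource through an XPointer fragment; the URIs below are hypothetical, and a real implementation would serialize proper RDF rather than Python tuples:

```python
# Sketch of "metadata as annotations": a (subject, predicate, object)
# triple whose subject targets a part of a resource via an XPointer
# fragment. The resource URI is hypothetical; the predicate reuses the
# Dublin Core Terms 'subject' property. A real system would emit RDF.

def annotate(resource_uri: str, xpointer: str, prop: str, value: str) -> tuple:
    """Build a triple describing a part of a resource."""
    subject = f"{resource_uri}#xpointer({xpointer})"
    return (subject, prop, value)

triple = annotate(
    "http://example.org/course/module1.xml",   # hypothetical resource
    "/module/section[2]",                      # the part being described
    "http://purl.org/dc/terms/subject",
    "photosynthesis",
)
print(triple[0])
```

Because the annotation lives outside the resource, different actors can attach metadata conforming to different Application Profiles without touching the content markup.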

  15. USGIN ISO metadata profile

    Science.gov (United States)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for ISO 19115/19119/19139 metadata, recommendations for simple metadata content, and a proposal for a URI scheme that identifies resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (open source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources; to describe the resource content, provenance, and quality so users can determine whether the resource will serve the intended usage; and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations, so that a single client can search against different catalogs and the metadata returned by catalogs can be parsed reliably to locate required information. The complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces the motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format it into text for people to read, then the detailed information could be put in free-text elements and be just as useful. For complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata…
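
As a minimal illustration of the parsing problem the profile tries to tame, the sketch below extracts a dataset title from an ISO 19139-style fragment using the standard gmd/gco namespaces; the record content is invented, and a real USGIN-profile document carries far more structure than this toy example:

```python
# Sketch: pulling a title out of a minimal ISO 19139-style fragment.
# The gmd/gco namespace URIs are the standard ISO TC211 ones; the
# record content itself is invented for this example.
import xml.etree.ElementTree as ET

NS = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}

record = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                             xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:identificationInfo><gmd:MD_DataIdentification><gmd:citation>
    <gmd:CI_Citation><gmd:title>
      <gco:CharacterString>Arizona geologic map</gco:CharacterString>
    </gmd:title></gmd:CI_Citation>
  </gmd:citation></gmd:MD_DataIdentification></gmd:identificationInfo>
</gmd:MD_Metadata>"""

root = ET.fromstring(record)
title = root.findtext(".//gmd:title/gco:CharacterString", namespaces=NS)
print(title)  # Arizona geologic map
```

The deep nesting here is exactly why a client needs firm conventions: without a profile, the same title can legally appear through several different element paths.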

  16. CLOUD EDUCATIONAL RESOURCES FOR PHYSICS LEARNING RESEARCHES SUPPORT

    Directory of Open Access Journals (Sweden)

    Oleksandr V. Merzlykin

    2015-10-01

    A definition of cloud educational resources is given in the paper, and their program and information components are characterized. Virtualization is reviewed as the technological foundation for transforming traditional electronic educational resources into cloud-based ones. The following levels of virtualization are described: data storage virtualization (Data as a Service), hardware virtualization (Hardware as a Service), computer virtualization (Infrastructure as a Service), software system virtualization (Platform as a Service), desktop virtualization (Desktop as a Service), and software user interface virtualization (Software as a Service). Possibilities for designing a cloud educational resources system to support physics learning research are shown, taking into account standards for learning object metadata (accessed via the OAI-PMH protocol) and for learning tools interoperability (LTI). An example of integrating cloud educational resources into the Moodle learning management system using OAI-PMH and LTI is given.
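
Accessing learning object metadata over OAI-PMH, as the abstract describes, can be sketched as building a ListRecords request and parsing an oai_dc response; the repository URL and record content below are invented, and a real harvester would fetch the response over HTTP:

```python
# Sketch of OAI-PMH harvesting: build a ListRecords request URL and
# parse a canned oai_dc response. The OAI 2.0 and Dublin Core namespace
# URIs are the standard ones; the endpoint and record are hypothetical.
import urllib.parse
import xml.etree.ElementTree as ET

def list_records_url(base: str, prefix: str = "oai_dc") -> str:
    """Assemble an OAI-PMH ListRecords request URL."""
    query = urllib.parse.urlencode({"verb": "ListRecords",
                                    "metadataPrefix": prefix})
    return f"{base}?{query}"

response = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Pendulum lab worksheet</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""

root = ET.fromstring(response)
titles = [e.text for e in root.iter("{http://purl.org/dc/elements/1.1/}title")]
print(list_records_url("https://repo.example.org/oai"), titles)
```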

  17. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    Science.gov (United States)

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education, or training, commonly stored in repositories searchable by their associated metadata. Unfortunately, under current standards such metadata is often missing or incorrectly entered, making search difficult or impossible. In this paper, we investigate…

  18. The Benefits and Future of Standards: Metadata and Beyond

    Science.gov (United States)

    Stracke, Christian M.

    This article discusses the benefits and future of standards and presents a generic multi-dimensional Reference Model. First, the importance and tasks of interoperability and quality development, and their relationship, are analyzed. Their connection and interdependence are especially evident in e-learning: interoperability is a basic requirement for quality development. The paper shows how standards and specifications support these crucial issues. The upcoming ISO metadata standard MLR (Metadata for Learning Resources) is introduced and used as an example for identifying the requirements and needs of future standardization. In conclusion, a vision of the challenges and potentials of e-learning standardization is outlined.

  19. An Automatic Indicator of the Reusability of Learning Objects Based on Metadata That Satisfies Completeness Criteria

    Science.gov (United States)

    Sanz-Rodríguez, Javier; Margaritopoulos, Merkourios; Margaritopoulos, Thomas; Dodero, Juan Manuel; Sánchez-Alonso, Salvador; Manitsaris, Athanasios

    The search for learning objects in open repositories is currently a tedious task, owing to the vast number of resources available and the fact that most of them lack associated ratings to help users make a choice. To tackle this problem, we propose a reusability indicator that can be calculated automatically from the metadata describing the objects, allowing us to select those materials most likely to be reused. For this reusability indicator to be applied, metadata records must reach a certain level of completeness, guaranteeing that the material is adequately described. The indicator is tested in two studies on the Merlot and eLera repositories, and the results obtained offer evidence of its effectiveness.
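
The completeness criterion the indicator depends on can be sketched as the share of required metadata fields that are actually filled in; the field list and the 0.75 threshold below are assumptions for illustration, not the values used in the paper:

```python
# Sketch of a metadata completeness check: the fraction of required
# fields that are filled in. Field names and the threshold are
# assumptions for this example, not the paper's actual criteria.

REQUIRED_FIELDS = ("title", "description", "language", "learningResourceType")

def completeness(record: dict) -> float:
    """Share of required fields with a non-empty value, in [0, 1]."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return filled / len(REQUIRED_FIELDS)

def adequately_described(record: dict, threshold: float = 0.75) -> bool:
    """Gate: only sufficiently complete records feed the indicator."""
    return completeness(record) >= threshold

record = {"title": "Intro to fractions", "description": "Worksheet",
          "language": "en"}
print(completeness(record))  # 0.75
```

Only records that pass this gate would then be scored for reusability, since an indicator computed from sparse metadata says little about the object itself.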

  20. Facilitating Teachers' Reuse of Mobile Assisted Language Learning Resources Using Educational Metadata

    Science.gov (United States)

    Zervas, Panagiotis; Sampson, Demetrios G.

    2014-01-01

    Mobile assisted language learning (MALL) and open access repositories for language learning resources are both topics that have attracted the interest of researchers and practitioners in technology enhanced learning (TeL). Yet, there is limited experimental evidence about possible factors that can influence and potentially enhance reuse of MALL…

  1. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes is conducted initially in order to identify content relevant for a coordination model, such as phases, activities, data schemas, rules, and relations. In this respect, the study looks into the…

  2. DataNet: A flexible metadata overlay over file resources

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Managing and sharing data stored in files is challenging due to the amounts of data produced by various scientific experiments [1]. While solutions such as Globus Online [2] focus on file transfer and synchronization, in this work we propose an additional layer of metadata over file resources which helps to categorize and structure the data, as well as to integrate it efficiently with web-based research gateways. The basic concept of the proposed solution [3] is a data model consisting of entities built from primitive types, such as numbers and texts, as well as from files and relationships among different entities. This allows complex data structure definitions to be built, mixing metadata and file data into a single model tailored to a given scientific field. A data model becomes actionable after being deployed as a data repository, which the proposed framework does automatically using one of the available PaaS (platform-as-a-service) platforms, and it is exposed to the world as a REST service, which…
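
The entity-plus-files data model can be sketched with dataclasses; the class and field names below are invented for illustration, whereas the real framework deploys such models as REST-backed repositories on a PaaS platform:

```python
# Sketch of a DataNet-style data model: an entity mixing primitive
# fields, a file reference, and relations to other entities. All names
# here are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FileRef:
    """Pointer to file data living alongside the metadata."""
    path: str
    checksum: str

@dataclass
class Measurement:
    """An entity built from primitives, a file, and relationships."""
    run_id: int
    note: str
    raw_data: FileRef
    related: List["Measurement"] = field(default_factory=list)

m = Measurement(run_id=7, note="calibration run",
                raw_data=FileRef("runs/0007.dat", "abc123"))
print(m.run_id, m.raw_data.path)
```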

  3. The metadata manual a practical workbook

    CERN Document Server

    Lubas, Rebecca; Schneider, Ingrid

    2013-01-01

    Cultural heritage professionals have high levels of training in metadata. However, the institutions in which they practice often depend on support staff, volunteers, and students in order to function. With limited time and funding for training in metadata creation for digital collections, many questions about metadata arise without a reliable, direct source for answers. The Metadata Manual provides such a resource, answering basic metadata questions and exploring metadata from a beginner's perspective. This title covers metadata basics, XML basics, Dublin Core, VRA Core…

  4. Educational Rationale Metadata for Learning Objects

    Directory of Open Access Journals (Sweden)

    Tom Carey

    2002-10-01

    Instructors searching for learning objects in online repositories will be guided in their choices by the content of the object, the characteristics of the learners addressed, and the learning process embodied in the object. We report here on a feasibility study for metadata that records process-oriented information about instructional approaches for learning objects, through a set of Educational Rationale [ER] tags which allow authors to describe the critical elements in their design intent. The prototype ER tags describe activities which have been demonstrated to be of value in learning, and authors select the activities whose support was critical in their design decisions. The prototype ER tag set consists of descriptors of the instructional approach used in the design, plus optional sub-elements for Comments, Importance, and Features which implement the design intent. The tag set was tested by the creators of four learning object modules, three intended for post-secondary learners and one for K-12 students and their families. In each case the creators reported that the ER tag set allowed them to express succinctly the key instructional approaches embedded in their designs. These results confirmed the overall feasibility of the ER tag approach as a means of capturing design intent from creators of learning objects. Much work remains to be done before a usable ER tag set can be specified, including evaluating the impact of ER tags during design on the instructional quality of learning objects.

  5. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  6. International Metadata Initiatives: Lessons in Bibliographic Control.

    Science.gov (United States)

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did and major points of difference from…

  7. The essential guide to metadata for books

    CERN Document Server

    Register, Renee

    2013-01-01

    In The Essential Guide to Metadata for Books, you will learn exactly what you need to know to effectively generate, handle and disseminate metadata for books and ebooks. This comprehensive but digestible document will explain the life-cycle of book metadata, industry standards, XML, ONIX and the essential elements of metadata. It will also show you how effective, well-organized metadata can improve your efforts to sell a book, especially when it comes to marketing, discoverability and converting at the point of sale. This information-packed document also includes a glossary of terms

  8. From CLARIN Component Metadata to Linked Open Data

    NARCIS (Netherlands)

    Durco, M.; Windhouwer, Menzo

    2014-01-01

    In the European CLARIN infrastructure a growing number of resources are described with Component Metadata. In this paper we describe a transformation to make this metadata available as linked data. After this first step it becomes possible to connect the CLARIN Component Metadata with other valuable…

  9. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    International Nuclear Information System (INIS)

    Malon, D; Albrand, S; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integrated view to physicists, and support both human use and programmatic access. In this paper we consider ATLAS metadata, metadata services, and metadata flow principally from the illustrative perspective of how disparate metadata are made available to executing jobs and, conversely, how metadata generated by such jobs are returned. We describe how metadata are read, how metadata are cached, and how metadata generated by jobs and the tasks of which they are a part are communicated, associated with data products, and preserved. We also discuss the principles that guide decision-making about metadata storage, replication, and access.

  10. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and…

  11. Constructing a Cross-Domain Resource Inventory: Key Components and Results of the EarthCube CINERGI Project.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Malik, T.; Hsu, L.; Gupta, A.; Grethe, J. S.; Valentine, D. W., Jr.; Lehnert, K. A.; Bermudez, L. E.; Ozyurt, I. B.; Whitenack, T.; Schachne, A.; Giliarini, A.

    2015-12-01

    While many geoscience-related repositories and data discovery portals exist, finding information about available resources remains a pervasive problem, especially when searching across multiple domains and catalogs. Inconsistent and incomplete metadata descriptions, disparate access protocols, semantic differences across domains, and troves of unstructured or poorly structured information that is hard to discover and use are major hindrances to discovery, while metadata compilation and curation remain manual and time-consuming. We report on the methodology, main results, and lessons learned from an ongoing effort to develop a geoscience-wide catalog of information resources with consistent metadata descriptions, traceable provenance, and automated metadata enhancement. Developing such a catalog is the central goal of CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability), an EarthCube building block project (earthcube.org/group/cinergi). The key novel technical contributions of the project include: a) a metadata enhancement pipeline and a set of document enhancers that automatically improve various aspects of metadata descriptions, including keyword assignment and definition of spatial extents; b) Community Resource Viewers: online applications for crowdsourcing community resource registry development, curation, and search, and for channeling metadata to the unified CINERGI inventory; c) metadata provenance, validation, and annotation services; d) user interfaces for advanced resource discovery; and e) a geoscience-wide ontology and machine learning to support automated semantic tagging and faceted search across domains. We demonstrate these CINERGI components in three types of user scenarios: (1) improving existing metadata descriptions maintained by government and academic data facilities, (2) supporting the work of several EarthCube Research Coordination Network projects in assembling information resources for their domains…

  12. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced…

  13. Metadata

    CERN Document Server

    Zeng, Marcia Lei

    2016-01-01

    Metadata remains the solution for describing the explosively growing, complex world of digital information, and continues to be of paramount importance for information professionals. Providing a solid grounding in the variety and interrelationships among different metadata types, Zeng and Qin's thorough revision of their benchmark text offers a comprehensive look at the metadata schemas that exist in the world of library and information science and beyond, as well as the contexts in which they operate. Cementing its value as both an LIS text and a handy reference for professionals already in the field, this book: * Lays out the fundamentals of metadata, including principles of metadata, structures of metadata vocabularies, and metadata descriptions * Surveys metadata standards and their applications in distinct domains and for various communities of metadata practice * Examines metadata building blocks, from modelling to defining properties, and from designing application profiles to implementing value vocabu...

  14. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group

    2007-12-01

    QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org.
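    Because QuakeML is XML-based, a consumer can read a basic event description with ordinary XML tooling. The snippet below parses a stripped-down, QuakeML-flavored fragment with the Python standard library; the element names (event, magnitude, mag, value) and the publicID identifier loosely follow the QuakeML draft, but this is an informal illustration, not a validated QuakeML 1.x document.

```python
# Read a minimal QuakeML-style event description with the standard library.
import xml.etree.ElementTree as ET

doc = """
<eventParameters>
  <event publicID="smi:example.org/event/2007abcd">
    <magnitude><mag><value>5.4</value></mag></magnitude>
  </event>
</eventParameters>
"""

root = ET.fromstring(doc)
for event in root.findall("event"):
    # publicID is the unique, location-independent resource identifier.
    mag = event.find("magnitude/mag/value")
    print(event.get("publicID"), float(mag.text))
```

    Real QuakeML documents carry a namespace and many more elements (origins, picks, focal mechanisms), which the same ElementTree path-based approach can handle.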

  15. Creating metadata that work for digital libraries and Google

    OpenAIRE

    Dawson, Alan

    2004-01-01

    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google us...

  16. Science friction: data, metadata, and collaboration.

    Science.gov (United States)

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  17. Are tags from Mars and descriptors from Venus? A study on the ecology of educational resource metadata

    NARCIS (Netherlands)

    Vuorikari, Riina; Sillaots, Martin; Panzavolta, Silvia; Koper, Rob

    2009-01-01

    Vuorikari, R., Sillaots, M., Panzavolta, S. & Koper, R. (2009). Are tags from Mars and descriptors from Venus? A study on the ecology of educational resource metadata. In M. Spaniol, Q. Li, R. Klamma & R. W. H. Lau (Eds.), Proceedings of the 8th International Conference Advances in Web Based

  18. Streamlining geospatial metadata in the Semantic Web

    Science.gov (United States)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process, and nevertheless they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and duplicated on different systems, seldom consistently. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
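    The "semantic lift" of phase iii) amounts to replacing free-text keyword strings with URIs from controlled vocabularies and emitting RDF. A minimal stdlib-only sketch of that step follows; the dataset URI, vocabulary mapping, and record layout are invented for illustration (the Dublin Core terms are real, but the paper's actual mappings may differ).

```python
# Lift keywords from an XML metadata record into RDF Turtle triples.
import xml.etree.ElementTree as ET

record = "<metadata><title>Lake sensors</title><keyword>limnology</keyword></metadata>"
vocab = {"limnology": "http://example.org/vocab/limnology"}  # assumed code list

root = ET.fromstring(record)
subject = "<http://example.org/dataset/lake-sensors>"  # assumed dataset URI
triples = [f'{subject} <http://purl.org/dc/terms/title> "{root.findtext("title")}" .']
for kw in root.findall("keyword"):
    uri = vocab.get(kw.text)
    if uri:  # the lift: a keyword string becomes a linked resource
        triples.append(f"{subject} <http://purl.org/dc/terms/subject> <{uri}> .")

print("\n".join(triples))
```

    Once keywords are URIs rather than strings, records on different systems that cite the same vocabulary term become machine-linkable, which is the interlinking the abstract argues is missing.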

  19. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and ...

  20. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation model, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D model by any definition. Regarding the remaining 49% which refer to 3D models, different definition of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices on 3D modeling, and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages as General and Complementary on contextual and structural information, and Availability on the transition from storage to delivery format. 
The proposed metadata set is compared with Canadian Geospatial

  1. Context-Adaptive Learning Designs by Using Semantic Web Services

    Science.gov (United States)

    Dietze, Stefan; Gugliotta, Alessio; Domingue, John

    2007-01-01

    IMS Learning Design (IMS-LD) is a promising technology aimed at supporting learning processes. IMS-LD packages contain the learning process metadata as well as the learning resources. However, the allocation of resources--whether data or services--within the learning design is done manually at design-time on the basis of the subjective appraisals…

  2. ORGANIZATION OF DIGITAL RESOURCES IN REPEC THROUGH REDIF METADATA

    Directory of Open Access Journals (Sweden)

    Salvador Enrique Vazquez Moctezuma

    2018-06-01

    Full Text Available Introduction: The disciplinary repository RePEc (Research Papers in Economics) provides access to a wide range of preprints, journal articles, books, book chapters and software in the economic and administrative sciences. This repository aggregates bibliographic records produced by different universities, institutes, editors and authors that work collaboratively following the norms of documentary organization. Objective: In this paper we identify and analyze the functioning of RePEc, including the organization of its files, which is characterized by the use of the Guildford protocol and ReDIF (Research Documentation Information Format) metadata templates for documentary description. Methodology: Part of this research was studied theoretically in the literature; another part was carried out by observing a series of features visible on the RePEc website and in the archives of a journal that collaborates in this repository. Results: The repository is a decentralized collaborative project that also provides several services derived from metadata analysis. Conclusions: We conclude that the ReDIF templates and the Guildford communication protocol are key elements for organizing records in RePEc, and that there is a similarity with Dublin Core metadata.
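    ReDIF templates are plain-text records of attribute: value lines. The toy parser below conveys the flavor of the format; the field names used here (Template-Type, Author-Name, Title, Handle) are common ReDIF attributes, but the sample record is fabricated and the snippet is not a complete ReDIF implementation.

```python
# Parse a ReDIF-style "Attribute: value" template into a dict.
template = """\
Template-Type: ReDIF-Paper 1.0
Author-Name: Jane Doe
Title: Metadata in Economics Repositories
Handle: RePEc:exa:wpaper:0001
"""

record = {}
for line in template.splitlines():
    if ":" in line:
        # Split only on the first colon, since values (e.g. handles) may
        # themselves contain colons.
        field, value = line.split(":", 1)
        record[field.strip()] = value.strip()

print(record["Handle"])
```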

  3. Metadata

    CERN Document Server

    Pomerantz, Jeffrey

    2015-01-01

    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tells us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades i...

  4. FSA 2002 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources. Metadata for the 2002 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad...

  5. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    Science.gov (United States)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    Mainly, ISO 19115 has been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both in aggregated resources (particularly series and datasets) and in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory, applying a complete metadata structure to all hierarchical levels, from the whole series down to an individual feature attribute, is possible, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality item at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission and in a complex vector topographic map that includes several feature types separated in layers (e.g. administrative limits, contour lines, building polygons, road lines, etc.). Moreover, because maps are traditionally split into tiles for handling at detailed scales, or because of satellite characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (the Landsat-5 TM cover of the Earth) is tiled into several parts (sheets or scenes, respectively). According to hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of the standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level, but rather to link particular values to the general value that they inherit. 
Conceptually the metadata
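    The override-or-inherit lookup that this record describes, where lower levels store only exceptions and fall back to the parent level for everything else, maps naturally onto a chained mapping. The sketch below uses Python's collections.ChainMap; the field names are illustrative placeholders, not ISO 19115 element names.

```python
# Hierarchical metadata with inheritance: a tile inherits from its dataset,
# which inherits from the series; each level stores only its exceptions.
from collections import ChainMap

series = {"license": "CC-BY", "lineage": "Landsat-5 TM mission", "cloud_cover": None}
dataset = {"cloud_cover": 0.12}            # overrides the general value
tile = {"bbox": (0.0, 40.0, 1.0, 41.0)}    # adds tile-specific metadata only

# Lookup walks tile -> dataset -> series, returning the most specific value.
tile_metadata = ChainMap(tile, dataset, series)
print(tile_metadata["license"], tile_metadata["cloud_cover"])  # CC-BY 0.12
```

    Storing only the exceptions at each level is exactly the economy the standard's Annex H aims for: the full registry is never materialized, yet every tile resolves a complete metadata record.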

  6. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    Full Text Available The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How did they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC-sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word-number distribution of both description and keywords tags.
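    A survey like this boils down to extracting <meta name="..." content="..."> tags from each sampled page. The stdlib sketch below shows one way to do that; the sample page and class name are invented, and the study's actual extraction tooling is not described in the abstract.

```python
# Collect <meta name="..." content="..."> pairs from an HTML page.
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

page = ('<html><head><meta name="description" content="Campus library portal">'
        '<meta name="keywords" content="library, metadata"></head></html>')

collector = MetaTagCollector()
collector.feed(page)
print(collector.meta)
```

    Run over a corpus, tallying which schema each name attribute belongs to (plain description/keywords versus DC.* elements) yields exactly the kind of distribution the study reports.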

  7. A Technical Mode for Sharing and Utilizing Open Educational Resources in Chinese Universities

    Directory of Open Access Journals (Sweden)

    Juan Yang

    2011-09-01

    Full Text Available Open educational resources merely offer the potential to help equalize access to worldwide knowledge and education; by themselves they do not produce effective learning or education. How to make effective use of such resources is still a big challenge. In this study, a technical mode is proposed to collect open educational resources from different sources on the Internet into a campus-network-based resource management system. The system facilitates free and easy access to the resources for instructors and students in universities and integrates the resources into learning and teaching. The technical issues regarding the design of the resource management system are examined, including the structure and functions of the system, metadata standard compatibility and scalability, metadata file format, and resource utilization assessment. Furthermore, the resource collecting, storage and utilization modes are also discussed so as to lay a technical basis for extensive and efficient sharing and utilization of OER in Chinese universities.

  8. FSA 2003-2004 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources. Metadata for the 2003-2004 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the...

  9. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a non-operational state.
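    The core idea, where the server skips journaling because acknowledged requests survive in client memory and can be replayed after a crash, can be modeled in a few lines. This is a toy model of the scheme as summarized above; the classes and data structures are illustrative, not the paper's implementation.

```python
# Log-less MDS: clients keep backups of acknowledged requests; recovery
# rebuilds server state by replaying those client-side backups.

class MetadataServer:
    def __init__(self):
        self.table = {}  # in-memory metadata, no on-disk journal

    def apply(self, request):
        path, attrs = request
        self.table[path] = attrs

class Client:
    def __init__(self):
        self.backup = []  # acknowledged requests kept in client memory

    def send(self, server, request):
        server.apply(request)
        self.backup.append(request)  # cheap backup instead of server journaling

def recover(clients):
    # After an MDS crash, replay every client's backup against a fresh server.
    fresh = MetadataServer()
    for c in clients:
        for request in c.backup:
            fresh.apply(request)
    return fresh

client = Client()
mds = MetadataServer()
client.send(mds, ("/data/run1", {"size": 42}))
recovered = recover([client])
print(recovered.table["/data/run1"]["size"])  # 42
```

    The trade-off the paper measures follows directly: appending to a client-side list is far cheaper than a synchronous journal write on the server's critical path, at the cost of a recovery protocol that must gather backups from all involved clients.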

  10. USGS Digital Orthophoto Quad (DOQ) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources. Metadata for the USGS DOQ Orthophoto Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad tile...

  11. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    Science.gov (United States)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.
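    The broker's architecture pairs a key-value store holding the original XML (the Redis role), a message queue decoupling ingest from indexing, and a search index (the ElasticSearch role). A Python stand-in for that shape, with all three parts faked in memory, might look like the following; names and the toy inverted index are illustrative only.

```python
# Minimal stand-in for a store + queue + index metadata broker.
from collections import defaultdict, deque

store, queue, index = {}, deque(), defaultdict(set)

def ingest(record_id, xml_text):
    store[record_id] = xml_text  # keep the original metadata verbatim
    queue.append(record_id)      # hand indexing off asynchronously

def index_worker():
    # In the real system this would be a separate consumer process.
    while queue:
        rid = queue.popleft()
        for token in store[rid].lower().split():
            index[token].add(rid)

ingest("nsidc-0001", "<title>Sea ice extent</title>")
index_worker()
print(sorted(index["ice"]))  # ['nsidc-0001']
```

    Keeping the verbatim XML in the store while the index holds only derived tokens mirrors the separation that lets such a system re-index at any time without touching the source records.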

  12. The Effects of Discipline on the Application of Learning Object Metadata in UK Higher Education: The Case of the Jorum Repository

    Science.gov (United States)

    Balatsoukas, Panos; O'Brien, Ann; Morris, Anne

    2011-01-01

    Introduction: This paper reports on the findings of a study investigating the potential effects of discipline (sciences and engineering versus humanities and social sciences) on the application of the Institute of Electrical and Electronics Engineers learning object metadata elements for the description of learning objects in the Jorum learning…

  13. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep the data accessible in the future. In addition, many universities and research institutions measure data that are unique and not repeatable, like the data produced by an observational network, and they want to keep these data for future generations. Consequently, such data should be ingested into preservation systems that automatically take care of file format changes. Open-source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult: format validators are not only remarkably slow, but, owing to the variety of file formats, different validators return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in their support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse-Institute Berlin, which is acting as an infrastructure facility, to generate exemplary workflows for moving research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise future scientists' awareness of research data management. As a testbed for ingest workflows, the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the

  14. The Theory and Implementation for Metadata in Digital Library/Museum

    Directory of Open Access Journals (Sweden)

    Hsueh-hua Chen

    1998-12-01

    Full Text Available Digital Libraries and Museums (DL/M) have become one of the important research issues of Library and Information Science as well as other related fields. This paper describes the basic concepts of DL/M and briefly introduces the development of the Taiwan Digital Museum Project. Based on the features of various collections, we discuss how to maintain, manage and exchange metadata, especially from the viewpoint of users. We propose a draft metadata scheme, MICI (Metadata Interchange for Chinese Information), developed by the ROSS (Resources Organization and Searching Specification) team. Finally, current problems and the future development of metadata are discussed. [Article content in Chinese]

  15. Building a Disciplinary Metadata Standards Directory

    Directory of Open Access Journals (Sweden)

    Alexander Ball

    2014-07-01

    Full Text Available The Research Data Alliance (RDA) Metadata Standards Directory Working Group (MSDWG) is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.

  16. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  17. Managing ebook metadata in academic libraries taming the tiger

    CERN Document Server

    Frederick, Donna E

    2016-01-01

    Managing ebook Metadata in Academic Libraries: Taming the Tiger tackles the topic of ebooks in academic libraries, a trend that has been welcomed by students, faculty, researchers, and library staff. However, at the same time, the reality of acquiring ebooks, making them discoverable, and managing them presents library staff with many new challenges. Traditional methods of cataloging and managing library resources are no longer relevant where the purchasing of ebooks in packages and demand driven acquisitions are the predominant models for acquiring new content. Most academic libraries have a complex metadata environment wherein multiple systems draw upon the same metadata for different purposes. This complexity makes the need for standards-based interoperable metadata more important than ever. In addition to complexity, the nature of the metadata environment itself typically varies slightly from library to library making it difficult to recommend a single set of practices and procedures which would be releva...

  18. Resources and Resourcefulness in Language Teaching and Learning

    African Journals Online (AJOL)

    Attempts will be made in this paper to examine what we mean by language, language teaching and learning, resources and resourcefulness in language teaching and learning and the benefit of teachers being resourceful in language teaching and learning to both the learners, the teachers, the society and the nation at ...

  19. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    Science.gov (United States)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use tools to evaluate metadata quality that utilize community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows and can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO 19115, FGDC, EML) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data they describe. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. 
Third, we showed how the centrally
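    Expressing a metadata recommendation "as code", as the abstract describes for Python and R, can be as simple as a named suite of boolean checks run over a parsed record. The recommendation and check names below are invented for illustration and are not the actual DataONE/ACDD check suite.

```python
# A toy metadata-quality recommendation expressed as a suite of checks.

def check_title(meta):
    return bool(meta.get("title", "").strip())

def check_keywords(meta):
    return len(meta.get("keywords", [])) >= 3  # assumed threshold

def check_bbox(meta):
    b = meta.get("bbox")
    return bool(b) and b["west"] <= b["east"] and b["south"] <= b["north"]

CHECKS = {"has_title": check_title,
          "enough_keywords": check_keywords,
          "valid_bbox": check_bbox}

def evaluate(meta):
    # Run every check and report pass/fail per check name.
    return {name: check(meta) for name, check in CHECKS.items()}

meta = {"title": "Soil moisture, 2015", "keywords": ["soil", "moisture"],
        "bbox": {"west": -105, "east": -100, "south": 35, "north": 40}}
print(evaluate(meta))
```

    Because each check is an ordinary function, a community can version its recommendation like any other code, and the same suite can run during submission, in a catalog, or over an entire corpus.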

  20. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    Energy Technology Data Exchange (ETDEWEB)

    Devarakonda, Ranjeet [ORNL]; Shrestha, Biva [ORNL]; Palanisamy, Giri [ORNL]; Hook, Leslie A [ORNL]; Killeffer, Terri S [ORNL]; Boden, Thomas A [ORNL]; Cook, Robert B [ORNL]; Zolly, Lisa [United States Geological Survey (USGS)]; Hutchison, Viv [United States Geological Survey (USGS)]; Frame, Mike [United States Geological Survey (USGS)]; Cialella, Alice [Brookhaven National Laboratory (BNL)]; Lazer, Kathy [Brookhaven National Laboratory (BNL)]

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register
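The form-to-XML step this abstract describes might look roughly like the following sketch; the element and field names are invented here and do not reflect OME's actual output schema:

```python
import xml.etree.ElementTree as ET

def form_to_xml(fields):
    """Serialize Web-form fields into a simple XML metadata document."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")

# Fields loosely mirroring the kinds of information the abstract lists:
# projects, parameters, time periods, and locations.
xml_text = form_to_xml({
    "project": "NGEE Arctic",
    "parameter": "soil temperature",
    "timePeriod": "2013-2014",
    "location": "Barrow, Alaska",
})
print(xml_text)
```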

  1. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    Science.gov (United States)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  2. Metadata Authoring with Versatility and Extensibility

    Science.gov (United States)

    Pollack, Janine; Olsen, Lola

    2004-01-01

    NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as an option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.

  3. Metadata and Service at the GFZ ISDC Portal

    Science.gov (United States)

    Ritschel, B.

    2008-05-01

    an explicit identification of single data files and the set-up of a comprehensive Earth science data catalog. The huge ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The parent DIF XML metadata documents describing the product types are stored and managed in ORACLE's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115 based web catalog service. In addition to this development, there is ISDC-related work on a semantic network of different kinds of metadata resources, such as standardized and non-standardized metadata documents and literature, as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.

  4. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  5. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.; Tzelnic, Percy; Ting, Dennis P. J.; Ionkov, Latchesar A.; Grider, Gary

    2017-12-26

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
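A toy sketch of the arrangement this abstract describes, with several metadata servers sharing one key-value store behind an abstract storage interface; the class and method names below are invented for illustration, and the "persistent" store is just a locked in-memory dict:

```python
import json
import threading

class KeyValueMetadataStore:
    """Stands in for the shared low-latency persistent key-value store."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = json.dumps(value)  # store in key-value format

    def get(self, key):
        with self._lock:
            raw = self._data.get(key)
        return None if raw is None else json.loads(raw)

class MetadataServer:
    """Each server handles requests independently of its peers,
    going through an abstract interface to the shared store."""
    def __init__(self, store):
        self.store = store

    def set_attr(self, path, attrs):
        self.store.put(f"inode:{path}", attrs)

    def get_attr(self, path):
        return self.store.get(f"inode:{path}")

shared = KeyValueMetadataStore()
server_a = MetadataServer(shared)
server_b = MetadataServer(shared)

# Metadata written through one server is visible through another,
# because both talk to the same backing store.
server_a.set_attr("/data/run1.dat", {"size": 4096, "owner": "alice"})
print(server_b.get_attr("/data/run1.dat"))
```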

  6. USGS 24k Digital Raster Graphic (DRG) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources. Metadata for the scanned USGS 24k Topographic Map Series (also known as 24k Digital Raster Graphic). Each scanned map is represented by a polygon in the layer and the...

  7. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    Science.gov (United States)

    2007-07-01

    the data created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so...notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song...describe structure and formatting which are critical to interoperability and the management of databases. Going back to the portable music player example

  8. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
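One of the harvesting best practices this poster alludes to, tracking a revision-time watermark so later changes in the source repository are also propagated, can be sketched as follows; the stub "remote", its records, and its field names are invented, and no real CMR endpoint or response format is used:

```python
# Stub standing in for a remote metadata API (invented records and fields).
REMOTE_RECORDS = [
    {"id": "C001", "updated": "2017-06-01T00:00:00Z", "title": "Dataset one"},
    {"id": "C002", "updated": "2017-09-15T00:00:00Z", "title": "Dataset two"},
]

def fetch_updated_since(timestamp):
    """Pretend remote call: records revised strictly after `timestamp`."""
    return [r for r in REMOTE_RECORDS if r["updated"] > timestamp]

def harvest(local_repo, last_harvest):
    """Merge remote changes into the partner repository and advance the
    watermark, so subsequent source updates are picked up next time."""
    changed = fetch_updated_since(last_harvest)
    for record in changed:
        local_repo[record["id"]] = record  # insert new or overwrite stale
    return max((r["updated"] for r in changed), default=last_harvest)

local = {}
watermark = harvest(local, "2017-01-01T00:00:00Z")  # initial harvest
watermark = harvest(local, watermark)               # incremental: no-op here
print(len(local), watermark)
```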

  9. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's effort to continually evolve its data systems led ECHO to enhance the method by which it receives inventory metadata from the data centers, allowing for multiple metadata formats including ISO 19115. ECHO's metadata model will also be mapped to the NASA-specific convention for ingesting science metadata into the ECHO system. As NASA's new Earth Science missions and data centers are migrating to the ISO 19115 standards, EOSDIS is developing metadata management resources to assist in reading, writing, and parsing ISO 19115 compliant metadata. To foster interoperability with other agencies and international partners, NASA is working to ensure that a common ISO 19115 convention is developed, enhancing data sharing capabilities and other data analysis initiatives. NASA is also investigating the use of ISO 19115 standards to encode data quality, lineage and provenance with stored values. A common metadata standard across NASA's Earth Science data systems promotes interoperability, enhances data utilization and removes levels of uncertainty found in data products.

  10. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    Science.gov (United States)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge with the increase in data volume, resulting in more metadata records needing to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and Common Metadata Repository (CMR). The UMM helps address hurdles posed by the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always evident to the end user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently with it. CAMP is unique in that it provides science team members a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC that could potentially benefit additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  11. ATLAS Metadata Interface (AMI), a generic metadata framework

    Science.gov (United States)

    Fulachier, J.; Odier, J.; Lambert, F.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  12. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration; Odier, Jerome; Lambert, Fabian

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  13. An Interactive, Web-Based Approach to Metadata Authoring

    Science.gov (United States)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as a subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically updates when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools

  14. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    Science.gov (United States)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous resources. The past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics hamper data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues; however, integrating semantic technologies with existing services is challenging due to the expandability limitations of the service frameworks and metadata templates. 4) The capabilities to help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results.
Furthermore, the presentation of the value

  15. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi Ara

    2015-04-01

    Full Text Available Metabolomics—technology for comprehensive detection of small molecules in an organism—lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published.
Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
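The idea of an ID system giving unique access to each level of tree-structured metadata can be sketched as follows; the ID format and field names here are invented for illustration and are not the actual TogoMD specification:

```python
# Illustrative tree-structured metadata with hierarchical IDs, loosely
# inspired by the levels named in the abstract (study, sample, method).
metadata = {
    "SE1": {
        "title": "Study of leaf metabolites",
        "children": {
            "SE1-S1": {
                "title": "Arabidopsis leaf sample",
                "children": {
                    "SE1-S1-M1": {"title": "LC-MS method", "children": {}},
                },
            },
        },
    },
}

def resolve(tree, node_id):
    """Walk the tree level by level, so every level of the hierarchy
    is uniquely addressable by its ID prefix chain."""
    parts = node_id.split("-")
    node = tree[parts[0]]
    for depth in range(2, len(parts) + 1):
        node = node["children"]["-".join(parts[:depth])]
    return node

print(resolve(metadata, "SE1-S1-M1")["title"])
```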

  16. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    the lessons learned. References: [1] R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. [2] R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010). [3] Devarakonda, R.; Palanisamy, G.; Green, J.; Wilson, B. E. "Mercury: An Example of Effective Software Reuse for Metadata Management Data Discovery and Access", Eos Trans. AGU, 89(53), Fall Meet. Suppl., IN11A-1019 (2008).

  17. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    Science.gov (United States)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the Evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images, drawn from NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project holdings. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system is comprised of a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115).
With ECHO, users can find the metadata stored in the data registry and then access the data either

  18. e-Learning Resource Brokers

    NARCIS (Netherlands)

    Retalis, Symeon; Papasalouros, Andreas; Avgeriou, Paris; Siassiakos, Kostas

    2004-01-01

    There is an exponentially increasing demand for provisioning of high-quality learning resources, which is not satisfied by current web technologies and systems. E-Learning Resource Brokers are a potential solution to this problem, as they represent the state-of-the-art in facilitating the exchange

  19. From Learning Object to Learning Cell: A Resource Organization Model for Ubiquitous Learning

    Science.gov (United States)

    Yu, Shengquan; Yang, Xianmin; Cheng, Gang; Wang, Minjuan

    2015-01-01

    This paper presents a new model for organizing learning resources: Learning Cell. This model is open, evolving, cohesive, social, and context-aware. By introducing a time dimension into the organization of learning resources, Learning Cell supports the dynamic evolution of learning resources while they are being used. In addition, by introducing a…

  20. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    Science.gov (United States)

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  1. A Comparative Study on Metadata Scheme of Chinese and American Open Data Platforms

    Directory of Open Access Journals (Sweden)

    Yang Sinan

    2018-01-01

    Full Text Available [Purpose/significance] Open government data is conducive to the rational development and utilization of data resources. It can encourage social innovation and promote economic development. Moreover, to ensure effective utilization and social increment of open government data, high-quality metadata schemes are necessary. [Method/process] Firstly, this paper analyzed the related research on open government data at home and abroad. Then, it investigated the metadata schemes of some of China's main local government open data platforms and compared them with the metadata standard for American open government data. [Result/conclusion] This paper reveals several disadvantages of Chinese local government open data that affect its usability: different governments use different metadata schemes, dataset descriptions are too simple for further utilization, and data are usually presented in HTML Web page format with low machine-readability. Therefore, the government should develop a standardized metadata scheme, drawing on mature and effective international metadata standards, to meet the social need for high-quality and high-value data.

  2. Flexible Authoring of Metadata for Learning : Assembling forms from a declarative data and view model

    OpenAIRE

    Enoksson, Fredrik

    2011-01-01

    With the vast amount of information in various formats that is produced today, it becomes necessary for consumers of this information to be able to judge if it is relevant for them. One way to enable that is to provide information about each piece of information, i.e. to provide metadata. When metadata is to be edited by a human being, a metadata editor needs to be provided. This thesis describes the design and practical use of a configuration mechanism for metadata editors called annotation profiles...

  3. Medical student use of digital learning resources.

    Science.gov (United States)

    Scott, Karen; Morris, Anne; Marais, Ben

    2018-02-01

    University students expect to use technology as part of their studies, yet health professional teachers can struggle with the change in student learning habits fuelled by technology. Our research aimed to document the learning habits of contemporary medical students during a clinical rotation by exploring the use of locally and externally developed digital and print self-directed learning resources, and study groups. We investigated the learning habits of final-stage medical students during their clinical paediatric rotation using mixed methods, involving learning analytics and a student questionnaire. Learning analytics tracked aggregate student usage statistics of locally produced e-learning resources on two learning management systems and mobile learning resources. The questionnaire recorded student-reported use of digital and print learning resources and study groups. The students made extensive use of digital self-directed learning resources, especially in the 2 weeks before the examination, which peaked the day before the written examination. All students used locally produced digital formative assessment, and most (74/98; 76%) also used digital resources developed by other institutions. Most reported finding locally produced e-learning resources beneficial for learning. In terms of traditional forms of self-directed learning, one-third (28/94; 30%) indicated that they never read the course textbook, and few students used face-to-face 39/98 (40%) or online 6/98 (6%) study groups. Learning analytics and student questionnaire data confirmed the extensive use of digital resources for self-directed learning. Through clarification of learning habits and experiences, we think teachers can help students to optimise effective learning strategies; however, the impact of contemporary learning habits on learning efficacy requires further evaluation.

  4. Designing Learning Resources in Synchronous Learning Environments

    DEFF Research Database (Denmark)

    Christiansen, Rene B

    2015-01-01

    Computer-mediated Communication (CMC) and synchronous learning environments offer new solutions for teachers and students that transcend the singular one-way transmission of content knowledge from teacher to student. CMC makes it possible not only to teach in a computer-mediated way but also to design and create new learning resources targeted to a specific group of learners. This paper addresses the possibilities of designing learning resources within synchronous learning environments. The empirical basis is a cross-country study involving students and teachers in primary schools in three Nordic countries (Denmark, Sweden and Norway). On the basis of these empirical studies, a set of design examples is drawn with the purpose of showing how the design fulfills the dual purpose of functioning as a remote, synchronous learning environment and - using the learning materials used and recordings...

  5. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
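The graph data model this abstract contrasts with hierarchical file systems might be sketched like this; the class and relationship names are invented, and the query helper is not the Quasar XPATH-extended language:

```python
class File:
    """Files, metadata, and relationships as first-class objects."""
    def __init__(self, name, **metadata):
        self.name = name
        self.metadata = metadata
        self.links = []  # outgoing (relationship, target) edges

    def link(self, relationship, target):
        self.links.append((relationship, target))

def follow(node, relationship):
    """One query step: files reachable via a given relationship type."""
    return [target for rel, target in node.links if rel == relationship]

raw = File("run42.h5", experiment="shock-tube")
plot = File("run42.png", kind="visualization")
raw.link("derivedBy", plot)  # the relationship itself carries meaning

print([f.name for f in follow(raw, "derivedBy")])
```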

  6. METADATA, DESKRIPSI SERTA TITIK AKSESNYA DAN INDOMARC [Metadata, Its Description and Access Points, and INDOMARC]

    Directory of Open Access Journals (Sweden)

    Sulistiyo Basuki

    2012-07-01

    Full Text Available The term metadata began to appear frequently in the literature on database management systems (DBMS) in the 1980s. It was used to describe the information needed to record the characteristics of the information held in a database. Many sources define the term. Metadata can identify a source, indicate the location of a document, and provide the summary needed to make use of it. In general, three activities are involved in producing metadata as an information package: creating a description of the information package, encoding that description, and providing access to it. This paper discusses the concept of metadata in relation to libraries. It covers the definition of metadata; the functions of metadata; encoding standards; bibliographic records, surrogates, and metadata; the creation of surrogate record content; approaches to metadata formats; and metadata and metadata standards, including INDOMARC.

  7. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, faceted search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  8. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, faceted search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
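The harvest-then-index pattern both Mercury records describe can be reduced to a small sketch. This is not Mercury's actual code; the provider records, field names, and functions below are all hypothetical. Metadata is pulled from provider endpoints into one central index, which then answers fielded text and bounding-box queries without touching the providers again.

```python
# Hedged sketch of federated harvesting with a centralized fielded index
# (illustrative only; not the Mercury toolset's implementation).
provider_a = [{"id": "a1", "title": "Soil carbon flux", "lat": 35.9, "lon": -84.3}]
provider_b = [{"id": "b1", "title": "River discharge",  "lat": 48.1, "lon": 11.6}]

def harvest(providers):
    """Re-build the central index from all provider endpoints (one cycle)."""
    index = {}
    for records in providers:
        for rec in records:
            index[rec["id"]] = rec
    return index

def fielded_search(index, text=None, bbox=None):
    """Answer text and spatial (south, west, north, east) queries locally."""
    hits = []
    for rec in index.values():
        if text and text.lower() not in rec["title"].lower():
            continue
        if bbox:
            south, west, north, east = bbox
            if not (south <= rec["lat"] <= north and west <= rec["lon"] <= east):
                continue
        hits.append(rec["id"])
    return hits

index = harvest([provider_a, provider_b])
print(fielded_search(index, text="river"))             # ['b1']
print(fielded_search(index, bbox=(30, -90, 40, -80)))  # ['a1']
```

Because `harvest` rebuilds the index on each (e.g. daily) cycle, providers keep ownership of their records while searches stay fast: queries never fan out to the distributed sources.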

  9. Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes

    NARCIS (Netherlands)

    Sampson, Demetrios; Fytros, Demetrios

    2008-01-01

    Sampson, D., & Fytros, D. (2008). Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes. In P. Diaz, Kinshuk, I. Aedo & E. Mora (Eds.), Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 288-292. July,

  10. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    Science.gov (United States)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. In the 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, MMI has over the years been supported by multiple organizations in its mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  11. Managing Human Resource Learning for Innovation

    DEFF Research Database (Denmark)

    Nielsen, Peter

    Managing human resource learning for innovation develops a systemic understanding of building innovative capabilities. Building innovative capabilities requires the active creation, coordination and absorption of useful knowledge, and thus a cohesive management approach to learning. Often learning...... in organizations and work is approached without consideration of how to integrate it into the management of human resources. The book investigates the empirical conditions for managing human resource learning for innovation. With a focus on innovative performance, the importance of modes of innovation, clues...

  12. The RBV metadata catalog

    Science.gov (United States)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory, both to the members of the RBV network and to any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among the observatories, ranging from no catalogue at all to mature harvestable catalogues. Here, we would like to explain the strategy used to design a state-of-the-art catalogue in this situation. The main features are as follows: - Multiple input methods: Metadata records in the catalogue can be entered with the graphical user interface, harvested from an existing catalogue, or imported from an information system through simplified web services. - Hierarchical levels: Metadata records may describe an observatory, one of its experimental sites, or a single dataset produced by one instrument. - Multilingualism: Metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO 19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: The user interface is built with the GWT framework to offer a rich client application with fully Ajax-based navigation. - Source code sharing: The work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  13. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    Science.gov (United States)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge…

  14. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    Science.gov (United States)

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings indicate that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users adopt a "minimum required" approach to metadata, thereby endangering their ability to recover data in the future. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  15. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    Science.gov (United States)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected, aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first, by automating metadata creation, pulling from known sources of data; and second, as detailed here, by developing user-friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed, and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage, and is being created with the same considerations in mind as those for MOBE. Metadata creation, metadata viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upward to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without imposing additional cognitive load on the user, an end goal we continue to pursue.

  16. Training and Best Practice Guidelines: Implications for Metadata Creation

    Science.gov (United States)

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  17. A Comparison of Educational Statistics and Data Mining Approaches to Identify Characteristics That Impact Online Learning

    Science.gov (United States)

    Miller, L. Dee; Soh, Leen-Kiat; Samal, Ashok; Kupzyk, Kevin; Nugent, Gwen

    2015-01-01

    Learning objects (LOs) are important online resources for both learners and instructors, and usage of LOs is growing. Automatic LO tracking collects large amounts of metadata about individual students, as well as data aggregated across courses, learning objects, and other demographic characteristics (e.g., gender). The challenge becomes identifying…

  18. ORGANIZATION OF BIODIVERSITY RESOURCES BASED ON THE PROCESS OF THEIR CREATION AND THE ROLE OF INDIVIDUAL ORGANISMS AS RESOURCE RELATIONSHIP NODES

    Directory of Open Access Journals (Sweden)

    Steven J Baskauf

    2010-06-01

    Full Text Available Kinds of occurrences (evidence of particular living organisms) can be grouped by common data and metadata characteristics that are determined by the way that the occurrence represents the organism. The creation of occurrence resources follows a pattern which can be used as the basis for organizing both the metadata associated with those resources and the relationships among the resources. The central feature of this organizational system is a resource representing the individual organism. This resource serves as a node which connects the organism's occurrences and any determinations of the organism's taxonomic identity. I specify a relatively small number of predicates which can define the important relationships among these resources and suggest which metadata properties should logically be associated with each kind of resource.

  19. RDA: Resource Description and Access: The new standard for metadata and resource discovery in the digital age

    Directory of Open Access Journals (Sweden)

    Carlo Bianchini

    2015-01-01

    Full Text Available RDA (Resource Description and Access) is going to promote a great change. Its guidelines – rather than rules – are addressed to anyone who wishes to describe and make accessible a cultural heritage collection, or indeed any collection: librarians, archivists, curators and professionals in any other branch of knowledge. The work is organized in two parts: the former contains the theoretical foundations of cataloguing (FRBR, ICP, the semantic web and linked data), the latter a critical presentation of the RDA guidelines. RDA aims to make possible the creation of well-structured metadata for any kind of resource, reusable in any context and technological environment. RDA offers a "set of guidelines and instructions to create data for discovery of resources". The guidelines stress four actions – to identify, to relate (from the FRBR/FRAD user tasks and ICP), to represent and to discover – and a noun: resource. To identify the entities of Group 1 and Group 2 of FRBR (Work, Expression, Manifestation, Item, Person, Family, Corporate Body); to relate the entities of Group 1 and Group 2 of FRBR by means of relationships; and to enable users to represent and discover the entities of Group 1 and Group 2 by means of their attributes and relationships. These last two actions are the reason for users' searches, and users are the focal point of the process. RDA enables the discovery of recorded knowledge, that is, any resource conveying information, any resource transmitting intellectual or artistic content by means of any kind of carrier and media. RDA is a content standard, not a display standard nor an encoding standard: it gives instructions to identify data and does not prescribe how to display or encode the data produced by the guidelines. RDA requires an original approach, a metanoia, a deep change in the way we think about cataloguing. The innovations in RDA are many: it promotes interoperability between catalogs and other search tools, it adopts the terminology and concepts of the Semantic Web, it…

  20. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    Science.gov (United States)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work is being done to bring the data systems of the earth sciences together. Discovery metadata is disseminated to data portals to allow the building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema, sharing only a subset of information with the meta-information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe it with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructure…
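The core idea of the abstract above (one versioned container holding files plus any number of metadata records, each in a different schema) can be sketched as follows. The class and field names are hypothetical, not the eSciDoc API, and the DOI and titles are invented for illustration.

```python
# Illustrative sketch of a multi-facetted metadata container: one "item"
# carries files and several metadata records keyed by schema name
# (hypothetical names; not the eSciDoc content model's actual interface).
class Item:
    def __init__(self):
        self.version = 0
        self.files = []
        self.metadata = {}   # schema name -> metadata record

    def add_metadata(self, schema, record):
        # Each change produces a new version of the container object.
        self.metadata[schema] = record
        self.version += 1

item = Item()
item.files.append("seismic_profile_07.sgy")
item.add_metadata("iso19115", {"title": "Seismic profile 07 (example)"})
item.add_metadata("datacite", {"identifier": "10.1234/example-doi"})

# The same dataset can now be disseminated under whichever schema a
# harvesting portal expects:
print(sorted(item.metadata))  # ['datacite', 'iso19115']
```

The design choice worth noting is that the schemas share the container, not a merged record: adding a new use context means adding one more metadata record, without rewriting the ones already attached.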

  1. Tethys Acoustic Metadata Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  2. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  3. Twenty-first century metadata operations challenges, opportunities, directions

    CERN Document Server

    Lee Eden, Bradford

    2014-01-01

    It has long been apparent to academic library administrators that current technical services operations within libraries need to be redirected and refocused in terms of both format priorities and human resources. A number of developments and directions have made this reorganization imperative, many of which have been accelerated by the current economic crisis. All of the chapters detail some aspect of technical services reorganization due to downsizing and/or reallocation of human resources, retooling professional and support staff for higher-level duties and/or non-MARC metadata, "value-a…

  4. How libraries use publisher metadata

    Directory of Open Access Journals (Sweden)

    Steve Shadle

    2013-11-01

    Full Text Available With the proliferation of electronic publishing, libraries are increasingly relying on publisher-supplied metadata to meet user needs for discovery in library systems. However, many publisher/content provider staff creating metadata are unaware of the end-user environment and how libraries use their metadata. This article provides an overview of the three primary discovery systems that are used by academic libraries, with examples illustrating how publisher-supplied metadata directly feeds into these systems and is used to support end-user discovery and access. Commonly seen metadata problems are discussed, with recommendations suggested. Based on a series of presentations given in Autumn 2012 to the staff of a large publisher, this article uses the University of Washington Libraries' systems and services as illustrative examples. Judging by the feedback received from these presentations, publishers (specifically staff not familiar with the big picture of metadata standards work) would benefit from a better understanding of the systems and services libraries provide using the data that is created and managed by publishers.

  5. A DDI3.2 Style for Data and Metadata Extracted from SAS

    OpenAIRE

    Hoyle, Larry

    2014-01-01

    Earlier work by Wackerow and Hoyle has shown that DDI can be a useful medium for interchange of data and metadata among statistical packages. DDI 3.2 has new features which enhance this capability, such as the ability to use UserAttributePairs to represent custom attributes. The metadata from a statistical package can also be represented in DDI3.2 using several different styles – embedded in a StudyUnit, in a Resource Package, or in a set of Fragments. The DDI Documentation for a Fragment sta...

  6. Metadata Life Cycles, Use Cases and Hierarchies

    Directory of Open Access Journals (Sweden)

    Ted Habermann

    2018-05-01

    Full Text Available The historic view of metadata as “data about data” is expanding to include data about other items that must be created, used, and understood throughout the data and project life cycles. In this context, metadata might better be defined as the structured and standard part of documentation, and the metadata life cycle can be described as the metadata content that is required for documentation in each phase of the project and data life cycles. This incremental approach to metadata creation is similar to the spiral model used in software development. Each phase also has distinct users and specific questions to which they need answers. In many cases, the metadata life cycle involves hierarchies where later phases have increased numbers of items. The relationships between metadata in different phases can be captured through structure in the metadata standard, or through conventions for identifiers. Metadata creation and management can be streamlined and simplified by re-using metadata across many records. Many of these ideas have been developed to various degrees in several Geoscience disciplines and are being used in metadata for documenting the integrated life cycle of environmental research in the Arctic, including projects, collection sites, and datasets.

  7. Active Marine Station Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Active Marine Station Metadata is a daily metadata report for active marine bouy and C-MAN (Coastal Marine Automated Network) platforms from the National Data...

  8. A metadata schema for data objects in clinical research.

    Science.gov (United States)

    Canham, Steve; Ohmann, Christian

    2016-11-24

    A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata schema is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.
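The three extensions the abstract proposes on top of DataCite (study identification with registry links, data object characteristics and identifiers, and location/ownership/access) can be sketched as a simple structured record. The field names, identifiers, and values below are illustrative assumptions, not the authors' exact schema.

```python
# Hedged sketch of the proposed clinical-research metadata components
# (illustrative field names; not the published schema itself).
from dataclasses import asdict, dataclass, field

@dataclass
class StudyIdentification:
    study_id: str                       # e.g. a trial registry identifier
    registry_links: list = field(default_factory=list)

@dataclass
class DataObjectCharacteristics:
    object_id: str                      # persistent identifier for the object
    object_type: str                    # e.g. "dataset", "protocol"

@dataclass
class AccessDetails:
    location: str
    owner: str
    access: str                         # e.g. "open", "on request"

@dataclass
class ClinicalMetadataRecord:
    study: StudyIdentification
    data_object: DataObjectCharacteristics
    access_details: AccessDetails

record = ClinicalMetadataRecord(
    StudyIdentification("NCT00000000", ["ClinicalTrials.gov:NCT00000000"]),
    DataObjectCharacteristics("doi:10.1234/example", "dataset"),
    AccessDetails("institutional repository", "Example Sponsor", "on request"),
)
print(sorted(asdict(record)))  # ['access_details', 'data_object', 'study']
```

A record shaped this way can be serialized with `asdict` and layered onto a DataCite-style description, which is the "natural extension of a widely agreed standard" the abstract argues for.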

  9. Critical Metadata for Spectroscopy Field Campaigns

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2014-04-01

    Full Text Available A field spectroscopy metadata standard is defined as those data elements that explicitly document the spectroscopy dataset and field protocols, sampling strategies, instrument properties, and environmental and logistical variables. Standards for field spectroscopy metadata affect the quality, completeness, reliability, and usability of datasets created in situ. Currently there is no standardized methodology for documentation of in situ spectroscopy data or metadata. This paper presents results of an international experiment comprising a web-based survey and expert panel evaluation that investigated critical metadata in field spectroscopy. The survey participants were a diverse group of scientists experienced in gathering spectroscopy data across a wide range of disciplines. Overall, respondents were in agreement about a core metadata set for generic campaign metadata, allowing a prioritized set of critical metadata elements to be proposed, including those relating to viewing geometry, location, general target and sampling properties, illumination, instrument properties, reference standards, calibration, hyperspectral signal properties, atmospheric conditions, and general project details. Consensus was greatest among individual expert groups in specific application domains. The results allow the identification of a core set of metadata fields that supports long-term data storage and serves as a foundation for a metadata standard. This paper is part one in a series about the core elements of a robust and flexible field spectroscopy metadata standard.

  10. Semantic Metadata for Heterogeneous Spatial Planning Documents

    Science.gov (United States)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.
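The RDFa approach described above can be illustrated with a short sketch. The ontology IRI and property names below (`plan:Regulation`, `plan:appliesToZone`) are invented placeholders, not the actual planning ontology used by the authors; the example only shows how semantic annotations are embedded as attributes in XHTML.

```python
# Sketch: embedding RDFa annotations in an XHTML fragment of a spatial
# planning document. The ontology namespace and property names are
# hypothetical placeholders, not the real Polish planning ontology.
import xml.etree.ElementTree as ET

PLAN_NS = "http://example.org/planning-ontology#"  # assumed ontology IRI

def annotate_regulation(reg_id: str, zone: str, text: str) -> str:
    """Return an XHTML <section> carrying RDFa about/typeof/property attributes."""
    section = ET.Element("section", {
        "about": f"#regulation-{reg_id}",
        "typeof": "plan:Regulation",
        "prefix": f"plan: {PLAN_NS}",
    })
    zone_el = ET.SubElement(section, "span", {"property": "plan:appliesToZone"})
    zone_el.text = zone
    body = ET.SubElement(section, "p", {"property": "plan:regulationText"})
    body.text = text
    return ET.tostring(section, encoding="unicode")

xhtml = annotate_regulation("12", "MN-1", "Maximum building height: 9 m.")
print(xhtml)
```

An RDFa-aware crawler can extract machine-readable triples from such a fragment while browsers render it as ordinary XHTML.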

  11. From learning objects to learning activities

    DEFF Research Database (Denmark)

    Dalsgaard, Christian

    2005-01-01

This paper discusses and questions the current metadata standards for learning objects from a pedagogical point of view. From a social constructivist approach, the paper discusses how learning objects can support problem-based, self-governed learning activities. In order to support this approach, it is argued that it is necessary to focus on learning activities rather than on learning objects. Further, it is argued that descriptions of learning objectives and learning activities should be separated from learning objects. The paper presents a new conception of learning objects which supports problem-based, self-governed activities. Further, a new way of thinking pedagogy into learning objects is introduced. It is argued that a lack of pedagogical thinking in learning objects is not solved through pedagogical metadata. Instead, the paper suggests the concept of references as an alternative...

  12. An Examination of the Adoption of Preservation Metadata in Cultural Heritage Institutions: An Exploratory Study Using Diffusion of Innovations Theory

    Science.gov (United States)

    Alemneh, Daniel Gelaw

    2009-01-01

Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential in providing a "core" set…

  13. A renaissance in library metadata? The importance of community collaboration in a digital world

    Directory of Open Access Journals (Sweden)

    Sarah Bull

    2016-07-01

Full Text Available This article summarizes a presentation given by Sarah Bull as part of the Association of Learned and Professional Society Publishers (ALPSP) seminar ‘Setting the Standard’ in November 2015. Representing the library community at the wide-ranging seminar, Sarah was tasked with making the topic of library metadata an engaging and informative one for a largely publisher audience. With help from co-author Amanda Quimby, this article is an attempt to achieve the same aim! It covers the importance of library metadata and standards in the supply chain and also reflects on the role of the community in successful standards development and maintenance. Special emphasis is given to the importance of quality in e-book metadata and the need for publisher and library collaboration to improve discovery, usage and the student experience. The article details the University of Birmingham experience of e-book metadata from a workflow perspective to highlight the complex integration issues which remain between content procurement and discovery.

  14. Exploring Characterizations of Learning Object Repositories Using Data Mining Techniques

    Science.gov (United States)

    Segura, Alejandra; Vidal, Christian; Menendez, Victor; Zapata, Alfredo; Prieto, Manuel

Learning object repositories provide a platform for the sharing of Web-based educational resources. As these repositories evolve independently, it is difficult for users to have a clear picture of the kind of contents they give access to. Metadata can be used to automatically extract a characterization of these resources by using machine learning techniques. This paper presents an exploratory study carried out in the contents of four public repositories that uses clustering and association rule mining algorithms to extract characterizations of repository contents. The results of the analysis include potential relationships between different attributes of learning objects that may be useful to gain an understanding of the kind of resources available and eventually develop search mechanisms that consider repository descriptions as criteria in federated search.
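As a rough illustration of the association-rule side of such an analysis, the following sketch mines 1-to-1 rules from toy LOM-style attribute sets. The records, attribute names and thresholds are invented, and a real study would run a proper algorithm such as Apriori over a full repository export.

```python
# Sketch: mining simple 1->1 association rules from LOM-style metadata
# records. Attribute names follow IEEE-LOM loosely; data is illustrative.
from itertools import combinations
from collections import Counter

records = [  # toy repository export: one set of attribute=value pairs per object
    {"format=pdf", "interactivity=low", "language=es"},
    {"format=pdf", "interactivity=low", "language=en"},
    {"format=html", "interactivity=high", "language=en"},
    {"format=pdf", "interactivity=low", "language=es"},
]

def association_rules(data, min_support=0.5, min_confidence=0.8):
    """Yield (antecedent, consequent, support, confidence) for 1->1 rules."""
    n = len(data)
    counts = Counter(item for rec in data for item in rec)
    pair_counts = Counter(
        pair for rec in data for pair in combinations(sorted(rec), 2))
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = c / counts[ante]
            if confidence >= min_confidence:
                yield ante, cons, support, confidence

rules = list(association_rules(records))
print(rules)
```

On this toy data the miner finds, for instance, that every PDF object also has low interactivity, the kind of repository characterization the study extracts at scale.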

  15. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security, also in connection with the built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata and digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only safe sharing of the highly sensitive metadata as provided in AMGA makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta-)data into a common catalogue. The biomedical and digital library catalogues have been deployed using a centralized structure for some time. They now intend to decentralize ...

  16. Detection of Vandalism in Wikipedia using Metadata Features – Implementation in Simple English and Albanian sections

    Directory of Open Access Journals (Sweden)

    Arsim Susuri

    2017-03-01

Full Text Available In this paper, we evaluate a list of classifiers in order to use them in the detection of vandalism by focusing on metadata features. Our work is focused on two low-resource data sets (Simple English and Albanian) from Wikipedia. The aim of this research is to prove that this form of vandalism detection applied in one data set (language) can be extended into another data set (language). Article views data sets in Wikipedia have been used rarely for the purpose of detecting vandalism. We will show the benefits of using the article views data set with features from the article revisions data set with the aim of improving the detection of vandalism. The key advantage of using metadata features is that these metadata features are language-independent and simple to extract because they require minimal processing. This paper shows that application of vandalism models across low-resource languages is possible, and vandalism can be detected through view patterns of articles.
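A minimal sketch of the metadata-feature idea: the features below (anonymity, empty edit summary, size change) are language-independent examples in the spirit of the paper, and the hand-weighted scorer merely stands in for the trained classifiers the authors evaluate.

```python
# Sketch: language-independent metadata features for Wikipedia vandalism
# detection. The feature names and weights are illustrative only; a real
# system would train a classifier on labelled revisions.
def extract_features(revision: dict) -> dict:
    """Derive simple metadata features from one revision record."""
    return {
        "anonymous": revision["user"] is None,          # edit by IP user
        "comment_empty": not revision.get("comment"),   # no edit summary
        "size_delta": revision["new_size"] - revision["old_size"],
    }

def vandalism_score(features: dict) -> float:
    """Toy scorer: each suspicious signal adds weight (illustrative only)."""
    score = 0.0
    score += 0.4 if features["anonymous"] else 0.0
    score += 0.3 if features["comment_empty"] else 0.0
    score += 0.3 if features["size_delta"] < -500 else 0.0  # large blanking
    return score

rev = {"user": None, "comment": "", "old_size": 2400, "new_size": 120}
score = vandalism_score(extract_features(rev))
print(score)
```

Because none of these signals depend on the article text, the same feature extraction can run unchanged against the Simple English or Albanian revision dumps.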

  17. Geo-Enrichment and Semantic Enhancement of Metadata Sets to Augment Discovery in Geoportals

    Directory of Open Access Journals (Sweden)

    Bernhard Vockner

    2014-03-01

Full Text Available Geoportals are established to function as main gateways to find, evaluate, and start “using” geographic information. Still, current geoportal implementations face problems in optimizing the discovery process due to semantic heterogeneity issues, which lead to low recall and low precision in text-based searches. Therefore, we propose an enhanced semantic discovery approach that supports multilingualism and information domain context. We present a workflow that enriches existing structured metadata with synonyms, toponyms, and translated terms derived from user-defined keywords based on multilingual thesauri and ontologies. To make the results easier to understand, we also provide automated translation capabilities for the resource metadata to support the user in conceiving the thematic content of the descriptive metadata, even if it has been documented using a language the user is not familiar with. In addition, to enable text-based spatial filtering, we add location-name keywords to metadata sets. These are based on the existing bounding box and tweak discovery scores when performing single text line queries. In order to improve the user’s search experience, we tailor faceted search strategies, presenting an enhanced query interface for geo-metadata discovery that transparently leverages the underlying thesauri and ontologies.
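The enrichment workflow can be sketched as follows; the thesaurus, gazetteer and field names are toy stand-ins for the multilingual thesauri, ontologies and bounding-box lookups described above.

```python
# Sketch: enriching a metadata record with synonyms, translated terms and
# toponyms before indexing. The lookup tables are illustrative stand-ins.
THESAURUS = {  # keyword -> synonyms and translated terms (toy data)
    "precipitation": ["rainfall", "Niederschlag", "précipitation"],
}
GAZETTEER = {  # bounding box (w, s, e, n) -> contained place names (toy data)
    (9.5, 46.4, 17.2, 49.0): ["Austria", "Salzburg", "Wien"],
}

def enrich(record: dict) -> dict:
    enriched = dict(record)
    keywords = list(record["keywords"])
    for kw in record["keywords"]:
        keywords.extend(THESAURUS.get(kw, []))   # semantic enrichment
    for bbox, places in GAZETTEER.items():
        if bbox == record.get("bbox"):
            keywords.extend(places)              # geo-enrichment from bbox
    enriched["keywords"] = sorted(set(keywords))
    return enriched

rec = {"title": "Rain gauge data", "bbox": (9.5, 46.4, 17.2, 49.0),
       "keywords": ["precipitation"]}
out = enrich(rec)
print(out["keywords"])
```

After enrichment, a plain text query for "Salzburg" or "Niederschlag" would match this record even though neither term appears in the original metadata.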

  18. Mdmap: A Tool for Metadata Collection and Matching

    Directory of Open Access Journals (Sweden)

    Rico Simke

    2014-10-01

    Full Text Available This paper describes a front-end for the semi-automatic collection, matching, and generation of bibliographic metadata obtained from different sources for use within a digitization architecture. The Library of a Billion Words project is building an infrastructure for digitizing text that requires high-quality bibliographic metadata, but currently only sparse metadata from digitized editions is available. The project’s approach is to collect metadata for each digitized item from as many sources as possible. An expert user can then use an intuitive front-end tool to choose matching metadata. The collected metadata are centrally displayed in an interactive grid view. The user can choose which metadata they want to assign to a certain edition, and export these data as MARCXML. This paper presents a new approach to bibliographic work and metadata correction. We try to achieve a high quality of the metadata by generating a large amount of metadata to choose from, as well as by giving librarians an intuitive tool to manage their data.
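The export step can be sketched with standard-library tools; the field and subfield choices below are a minimal illustration, not Mdmap's actual MARCXML output.

```python
# Sketch: exporting user-selected metadata as minimal MARCXML, in the
# spirit of the Mdmap export step. Field/subfield choices are illustrative.
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"

def to_marcxml(title: str, author: str) -> str:
    ET.register_namespace("marc", MARC_NS)
    record = ET.Element(f"{{{MARC_NS}}}record")
    # 245$a = title statement, 100$a = main entry personal name
    for tag, code, value in (("245", "a", title), ("100", "a", author)):
        field = ET.SubElement(record, f"{{{MARC_NS}}}datafield",
                              {"tag": tag, "ind1": " ", "ind2": " "})
        sub = ET.SubElement(field, f"{{{MARC_NS}}}subfield", {"code": code})
        sub.text = value
    return ET.tostring(record, encoding="unicode")

xml = to_marcxml("Faust", "Goethe, Johann Wolfgang von")
print(xml)
```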

  19. SEMANTIC METADATA FOR HETEROGENEOUS SPATIAL PLANNING DOCUMENTS

    Directory of Open Access Journals (Sweden)

    A. Iwaniak

    2016-09-01

Full Text Available Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  20. OAI-PMH for resource harvesting, tutorial 2

    CERN Multimedia

    CERN. Geneva; Nelson, Michael

    2005-01-01

A variety of examples have arisen in which the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has been used for applications beyond bibliographic metadata interchange. One of these examples is the use of OAI-PMH to harvest resources and not just metadata. Advanced resource discovery and preservation capabilities are possible by combining complex object formats such as MPEG-21 DIDL, METS and SCORM with the OAI-PMH. In this tutorial, we review community conventions and practices that have provided the impetus for resource harvesting. We show how the introduction of complex object formats for the representation of resources leads to a robust, OAI-PMH-based framework for resource harvesting. We detail how complex object formats fit in the OAI-PMH data model, and how (compound) digital objects can be represented using a complex object format for exposure by an OAI-PMH repository. We also cover tools that are available for the implementation of an OAI-PMH-based resource harvesting solution. Fu...
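A minimal sketch of the harvesting side: the response below is a hand-made OAI-PMH ListRecords fragment (not from a real repository), and the resource URL is simply read from a Dublin Core identifier rather than from a full complex-object format such as MPEG-21 DIDL.

```python
# Sketch: parsing an OAI-PMH ListRecords response and pulling out both the
# Dublin Core metadata and a resource URL for harvesting. The sample
# response is hand-made for illustration.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:42</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Sample thesis</dc:title>
          <dc:identifier>http://example.org/objects/42.pdf</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest(xml_text):
    """Yield (oai_identifier, title, resource_url) for each record."""
    root = ET.fromstring(xml_text)
    for rec in root.iter(f"{OAI}record"):
        ident = rec.find(f"{OAI}header/{OAI}identifier").text
        title = rec.find(f".//{DC}title").text
        url = rec.find(f".//{DC}identifier").text  # points at the resource
        yield ident, title, url

items = list(harvest(SAMPLE))
print(items)
```

In a real harvester the same parsing loop would run over paged HTTP responses from a repository's ListRecords endpoint, following `resumptionToken`s, and the fetched URLs would be downloaded to complete the resource harvest.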

  1. Integrative learning for practicing adaptive resource management

    Directory of Open Access Journals (Sweden)

    Craig A. McLoughlin

    2015-03-01

Full Text Available Adaptive resource management is a learning-by-doing approach to natural resource management. Its effective practice involves the activation, completion, and regeneration of the "adaptive management cycle" while working toward achieving a flexible set of collaboratively identified objectives. This iterative process requires application of single-, double-, and triple-loop learning, to strategically modify inputs, outputs, assumptions, and hypotheses linked to improving policies, management strategies, and actions, along with transforming governance. Obtaining an appropriate balance between these three modes of learning has been difficult to achieve in practice and building capacity in this area can be achieved through an emphasis on reflexive learning, by employing adaptive feedback systems. A heuristic reflexive learning framework for adaptive resource management is presented in this manuscript. It is built on the conceptual pillars of the following: stakeholder driven adaptive feedback systems; strategic adaptive management (SAM); and hierarchy theory. The SAM Reflexive Learning Framework (SRLF) emphasizes the types, roles, and transfer of information within a reflexive learning context. Its adaptive feedback systems enhance the facilitation of single-, double-, and triple-loop learning. Focus on the reflexive learning process is further fostered by streamlining objectives within and across all governance levels; incorporating multiple interlinked adaptive management cycles; having learning as an ongoing, nested process; recognizing when and where to employ the three modes of learning; distinguishing initiating conditions for this learning; and contemplating practitioner mandates for this learning across governance levels. The SRLF is a key enabler for implementing the "adaptive management cycle," and thereby translating the theory of adaptive resource management into practice. It promotes the heuristics of adaptive management within a cohesive ...

  2. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is the differing data types and representations of metadata on each website. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  3. Optimising metadata workflows in a distributed information environment

    OpenAIRE

    Robertson, R. John; Barton, Jane

    2005-01-01

    The different purposes present within a distributed information environment create the potential for repositories to enhance their metadata by capitalising on the diversity of metadata available for any given object. This paper presents three conceptual reference models required to achieve this optimisation of metadata workflow: the ecology of repositories, the object lifecycle model, and the metadata lifecycle model. It suggests a methodology for developing the metadata lifecycle model, and ...

  4. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    Science.gov (United States)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data by advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure that implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by • iterative and intense exchange with the community representatives, • usage of controlled vocabularies and community-specific ontologies and • formal and semantic validation. Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing that enables users to query and search in the catalogue. The homogenization of the community-specific data models and vocabularies enables not
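The harvest-convert-upload chain can be sketched at its mapping step; the target field names below mimic a CKAN dataset dictionary and are not the actual B2FIND schema, and the in-line assertion stands in for the formal validation mentioned above.

```python
# Sketch: the "map" step of a harvest-map-upload ingestion chain, converting
# a community-specific record into a flat key/value dataset for a CKAN-based
# catalogue. Field names are illustrative, not the real B2FIND schema.
def map_record(raw: dict, community: str) -> dict:
    """Map one harvested record onto a homogeneous target schema."""
    mapped = {
        "title": raw.get("dc:title") or raw.get("name") or "Untitled",
        "notes": raw.get("dc:description", ""),
        "tags": [{"name": t.strip().lower()} for t in raw.get("keywords", [])],
        "extras": [{"key": "Community", "value": community}],
    }
    # Stand-in for formal validation: the catalogue needs a non-empty title.
    assert mapped["title"], "validation failed: empty title"
    return mapped

raw = {"dc:title": "Ocean temperature 2015",
       "keywords": ["Ocean ", "TEMPERATURE"]}
dataset = map_record(raw, community="seadatanet")
print(dataset["tags"])
```

The resulting dictionary could then be posted to the catalogue via CKAN's `package_create` API action, after which SOLR indexing makes it searchable in the portal.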

  5. eLearning resources to supplement postgraduate neurosurgery training.

    Science.gov (United States)

    Stienen, Martin N; Schaller, Karl; Cock, Hannah; Lisnic, Vitalie; Regli, Luca; Thomson, Simon

    2017-02-01

In an increasingly complex and competitive professional environment, improving methods to educate neurosurgical residents is key to ensure high-quality patient care. Electronic (e)Learning resources promise interactive knowledge acquisition. We set out to give a comprehensive overview of available eLearning resources that aim to improve postgraduate neurosurgical training and review the available literature. A MEDLINE query was performed, using the search term "electronic AND learning AND neurosurgery". Only peer-reviewed English-language articles on the use of any means of eLearning to improve theoretical knowledge in postgraduate neurosurgical training were included. Reference lists were crosschecked for further relevant articles. Captured parameters were the year, country of origin, method of eLearning reported, and type of article, as well as its conclusion. eLearning resources were additionally searched for using Google. Of the n = 301 articles identified by the MEDLINE search, n = 43 were analysed in detail. Applying defined criteria, n = 28 articles were excluded and n = 15 included. Most articles were generated within this decade, with groups from the USA, the UK and India having a leadership role. The majority of articles reviewed existing eLearning resources, others reported on the concept, development and use of generated eLearning resources. There was no article that scientifically assessed the effectiveness of eLearning resources (against traditional learning methods) in terms of efficacy or costs. Only one article reported on satisfaction rates with an eLearning tool. All authors of articles dealing with eLearning and the use of new media in neurosurgery uniformly agreed on its great potential and increasing future use, but most also highlighted some weaknesses and possible dangers. This review found only a few articles dealing with the modern aspects of eLearning as an adjunct to postgraduate neurosurgery training. Comprehensive

  6. Improving the accessibility and re-use of environmental models through provision of model metadata - a scoping study

    Science.gov (United States)

    Riddick, Andrew; Hughes, Andrew; Harpham, Quillon; Royse, Katherine; Singh, Anubha

    2014-05-01

There has been an increasing interest both from academic and commercial organisations over recent years in developing hydrologic and other environmental models in response to some of the major challenges facing the environment, for example environmental change and its effects and ensuring water resource security. This has resulted in a significant investment in modelling by many organisations both in terms of financial resources and intellectual capital. To capitalise on the effort of producing models, it is necessary for the models to be both discoverable and appropriately described. If this is not undertaken then the effort in producing the models will be wasted. However, whilst there are some recognised metadata standards relating to datasets, these may not completely address the needs of modellers regarding input data for example. Also there appears to be a lack of metadata schemes configured to encourage the discovery and re-use of the models themselves. The lack of an established standard for model metadata is considered to be a factor inhibiting the more widespread use of environmental models, particularly the use of linked model compositions which fuse together hydrologic models with models from other environmental disciplines. This poster presents the results of a Natural Environment Research Council (NERC) funded scoping study to understand the requirements of modellers and other end users for metadata about data and models. A user consultation exercise using an on-line questionnaire has been undertaken to capture the views of a wide spectrum of stakeholders on how they are currently managing metadata for modelling. This has provided a strong confirmation of our original supposition that there is a lack of systems and facilities to capture metadata about models. A number of specific gaps in current provision for data and model metadata were also identified, including a need for a standard means to record detailed information about the modelling

  7. Leveraging Python to improve ebook metadata selection, ingest, and management

    Directory of Open Access Journals (Sweden)

    Kelly Thompson

    2017-10-01

    Full Text Available Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
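One flavour of the automated quality control mentioned above can be sketched as a simple completeness check; the required fields and sample records are illustrative, not a specific vendor's or library's schema.

```python
# Sketch: a data-driven quality-control pass over ebook metadata records,
# flagging incomplete or malformed entries before load. Field names are
# illustrative, not a specific vendor's schema.
REQUIRED = ("title", "isbn", "access_url")

def qc_report(records):
    """Split records into (clean, flagged); flagged records carry a reason."""
    clean, flagged = [], []
    for rec in records:
        missing = [f for f in REQUIRED if not rec.get(f)]
        if missing:
            flagged.append({**rec, "_reason": "missing: " + ", ".join(missing)})
        elif len(rec["isbn"].replace("-", "")) not in (10, 13):
            flagged.append({**rec, "_reason": "malformed ISBN"})
        else:
            clean.append(rec)
    return clean, flagged

records = [
    {"title": "A", "isbn": "978-0-306-40615-7", "access_url": "https://x"},
    {"title": "B", "isbn": "12345", "access_url": "https://y"},
    {"title": "C", "isbn": "9780306406157", "access_url": ""},
]
clean, flagged = qc_report(records)
print(len(clean), len(flagged))
```

Run on each vendor delivery, such a script lets only complete records flow into the discovery layer while the flagged remainder goes back for manual review.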

  8. Intelligent Discovery for Learning Objects Using Semantic Web Technologies

    Science.gov (United States)

    Hsu, I-Ching

    2012-01-01

    The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…

  9. Metadata in Scientific Dialects

    Science.gov (United States)

    Habermann, T.

    2011-12-01

Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggest that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and RESTful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components and builds on the ISO Standards as the "unabridged dictionary" that encompasses the content of many other dialects.

  10. Metadata Access Tool for Climate and Health

    Science.gov (United States)

    Trtanji, J.

    2012-12-01

The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climatic Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  11. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    Science.gov (United States)

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee’s Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcGIS Desktop to facilitate a semi-automated workflow to create and update metadata records in ESRI’s 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and costs by facilitating the development of metadata compliant with the Federal Geographic Data Committee’s Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcGIS Desktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).
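The auto-population idea translates readily to plain Python; in this sketch a CSV file stands in for an ESRI layer, and the element names are FGDC-like but not the standard's exact element names.

```python
# Sketch: auto-populating metadata elements from the dataset itself, in the
# spirit of the Metadata Wizard (which reads these from ESRI layers; here a
# plain CSV stands in, and the element names are only FGDC-like).
import csv
import io
import platform
from datetime import date

def auto_populate(csv_text: str) -> dict:
    rows = list(csv.reader(io.StringIO(csv_text)))
    return {
        "metadata_date": date.today().isoformat(),
        "processing_environment": platform.platform(),
        "column_count": len(rows[0]) if rows else 0,
        "feature_count": max(len(rows) - 1, 0),  # rows minus header row
    }

sample = "site,lat,lon\nA,40.1,-105.3\nB,39.9,-105.1\n"
elements = auto_populate(sample)
print(elements["column_count"], elements["feature_count"])
```

With the mechanical elements filled in automatically, the human effort is reserved for what a script cannot infer, such as attribute definitions and abstracts.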

  12. The Machinic Temporality of Metadata

    Directory of Open Access Journals (Sweden)

    Claudio Celis

    2015-03-01

    Full Text Available In 1990 Deleuze introduced the hypothesis that disciplinary societies are gradually being replaced by a new logic of power: control. Accordingly, Matteo Pasquinelli has recently argued that we are moving towards societies of metadata, which correspond to a new stage of what Deleuze called control societies. Societies of metadata are characterised by the central role that meta-information acquires, both as a source of surplus value and as an apparatus of social control. The aim of this article is to develop Pasquinelli's thesis by examining the temporal scope of these emerging societies of metadata. In particular, this article employs Guattari's distinction between human and machinic times. Through these two concepts, this article attempts to show how societies of metadata combine the two poles of capitalist power formations as identified by Deleuze and Guattari, i.e. social subjection and machinic enslavement. It begins by presenting the notion of metadata in order to identify some of the defining traits of contemporary capitalism. It then examines Berardi's account of the temporality of the attention economy from the perspective of the asymmetric relation between cyber-time and human time. The third section challenges Berardi's definition of the temporality of the attention economy by using Guattari's notions of human and machinic times. Parts four and five fall back upon Deleuze and Guattari's notions of machinic surplus labour and machinic enslavement, respectively. The concluding section tries to show that machinic and human times constitute two poles of contemporary power formations that articulate the temporal dimension of societies of metadata.

  13. Incorporating ISO Metadata Using HDF Product Designer

    Science.gov (United States)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of various complexity in HDF5 files is rapidly outstripping the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to devise ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach: using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.
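
    The "catch-all container" idea can be illustrated with a small sketch: metadata with no home in existing conventions is serialized as an ISO 19115 fragment and stored whole in an HDF5 string attribute. The namespaces follow the ISO 19139 XML encoding, but the function, attribute name, and workflow are assumptions, not HDF Product Designer's actual implementation:

```python
import xml.etree.ElementTree as ET

# Namespaces from the ISO 19139 XML encoding of ISO 19115.
GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

def iso_keywords_xml(keywords):
    """Serialize keywords as an ISO 19115 MD_Keywords fragment, ready
    to store whole in an HDF5 string attribute (with h5py this would
    be something like: f.attrs["iso_metadata"] = text)."""
    root = ET.Element(f"{{{GMD}}}MD_Keywords")
    for kw in keywords:
        keyword = ET.SubElement(root, f"{{{GMD}}}keyword")
        ET.SubElement(keyword, f"{{{GCO}}}CharacterString").text = kw
    return ET.tostring(root, encoding="unicode")

text = iso_keywords_xml(["sea surface temperature", "L2P"])
```

    Because the fragment is schema-valid ISO XML, downstream tools can extract and validate it without bespoke parsing.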

  14. Evaluating the privacy properties of telephone metadata

    Science.gov (United States)

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  15. Pupil Science Learning in Resource-Based e-Learning Environments

    Science.gov (United States)

    So, Wing-mui Winnie; Ching, Ngai-ying Fiona

    2011-01-01

    With the rapid expansion of broadband Internet connection and availability of high performance yet low priced computers, many countries around the world are advocating the adoption of e-learning, the use of computer technology to improve learning and teaching. The trend of e-learning has urged many teachers to incorporate online resources in their…

  16. U.S. EPA Metadata Editor (EME)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Metadata Editor (EME) allows users to create geospatial metadata that meets EPA's requirements. The tool has been developed as a desktop application that...

  17. Making Interoperability Easier with the NASA Metadata Management Tool

    Science.gov (United States)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because the standard can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool for developing ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.
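
    A completeness assessment of the kind described might look like the following sketch. The field names are loosely modelled on UMM-C but are assumptions for illustration, as is the guidance wording:

```python
# Hypothetical required/recommended collection fields (not the real
# UMM-C schema, which is far richer).
REQUIRED = {"ShortName", "Version", "EntryTitle", "Abstract"}
RECOMMENDED = {"DOI", "TemporalExtents", "SpatialExtent"}

def assess_collection(record):
    """Compare a collection record against required and recommended
    fields and emit human-readable guidance for the curator."""
    guidance = []
    for name in sorted(REQUIRED - record.keys()):
        guidance.append(f"Required field '{name}' is missing; add it before publishing.")
    for name in sorted(RECOMMENDED - record.keys()):
        guidance.append(f"Recommended field '{name}' is missing; adding it improves discoverability.")
    return {"compliant": REQUIRED <= record.keys(), "guidance": guidance}

result = assess_collection({"ShortName": "MOD11A1", "Version": "6.1",
                            "EntryTitle": "MODIS LST Daily L3",
                            "Abstract": "Daily land surface temperature.",
                            "DOI": "10.5067/MODIS/MOD11A1.061"})
```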

  18. Exploiting Dark Information Resources to Create New Value Added Services to Study Earth Science Phenomena

    Science.gov (United States)

    Ramachandran, Rahul; Maskey, Manil; Li, Xiang; Bugbee, Kaylin

    2017-01-01

    This paper presents two research applications exploiting unused metadata resources in novel ways to aid data discovery and exploration capabilities. The results based on the experiments are encouraging and each application has the potential to serve as a useful standalone component or service in a data system. There were also some interesting lessons learned while designing the two applications and these are presented next.

  19. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Full Text Available Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific geoprocessing software. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved and compared with similar offline datasets from other sources. This paper presents a brief summary of, and describes the matching process between, the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115 - adopted as the reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata for the identification and data quality packages, and indicates directions for retrieving metadata related to other packages. Users are thus able to assess whether the provided contents fit their purposes.
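
    The matching process the paper describes (harvesting identification metadata from a service's capabilities document) can be sketched like this. The capabilities fragment follows the WMS 1.3.0 structure, with invented values; the mapping onto ISO 19115 fields is a simplified assumption:

```python
import xml.etree.ElementTree as ET

# A trimmed WMS 1.3.0 GetCapabilities response (structure per the OGC
# spec; server and layer values invented for illustration).
CAPABILITIES = """\
<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Service><Name>WMS</Name><Title>Sample Map Server</Title></Service>
  <Capability>
    <Layer>
      <Layer>
        <Name>roads</Name>
        <Title>Road network</Title>
        <EX_GeographicBoundingBox>
          <westBoundLongitude>-74.3</westBoundLongitude>
          <eastBoundLongitude>-73.6</eastBoundLongitude>
          <southBoundLatitude>40.4</southBoundLatitude>
          <northBoundLatitude>41.0</northBoundLatitude>
        </EX_GeographicBoundingBox>
      </Layer>
    </Layer>
  </Capability>
</WMS_Capabilities>
"""

NS = {"wms": "http://www.opengis.net/wms"}

def harvest(xml_text):
    """Map capabilities elements onto ISO 19115 identification fields:
    roughly Title -> citation title, Name -> identifier, and
    EX_GeographicBoundingBox -> geographic extent."""
    root = ET.fromstring(xml_text)
    records = []
    for layer in root.iterfind(".//wms:Layer/wms:Layer", NS):
        bbox = layer.find("wms:EX_GeographicBoundingBox", NS)
        records.append({
            "title": layer.findtext("wms:Title", namespaces=NS),
            "identifier": layer.findtext("wms:Name", namespaces=NS),
            "extent": {child.tag.split("}")[1]: float(child.text)
                       for child in bbox},
        })
    return records

records = harvest(CAPABILITIES)
```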

  20. The PDS4 Metadata Management System

    Science.gov (United States)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  1. Learning Method, Facilities And Infrastructure, And Learning Resources In Basic Networking For Vocational School

    OpenAIRE

    Pamungkas, Bian Dwi

    2017-01-01

    This study aims to examine the contribution of learning methods on learning output, the contribution of facilities and infrastructure on output learning, the contribution of learning resources on learning output, and the contribution of learning methods, the facilities and infrastructure, and learning resources on learning output. The research design is descriptive causative, using a goal-oriented assessment approach in which the assessment focuses on assessing the achievement of a goal. The ...

  2. Large-Scale Data Collection Metadata Management at the National Computational Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computational Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and the Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56,000-core supercomputer, virtual labs on a 3,000-core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. Subset- and file-level metadata catalogues are linked with the collection level through parent-child relationships defined using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
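
    The parent-child linkage between collection-level and file-level records can be sketched as follows; the record fields and helper functions are invented for illustration:

```python
import uuid

def make_collection(title):
    """A minimal collection-level record with a persistent UUID."""
    return {"uuid": str(uuid.uuid4()), "title": title, "type": "collection"}

def make_file_record(collection, path):
    """A file-level record linked to its parent collection by UUID."""
    return {"uuid": str(uuid.uuid4()), "parent_uuid": collection["uuid"],
            "path": path, "type": "file"}

def children_of(collection, catalogue):
    """Resolve the parent-child relationship in the other direction."""
    return [r for r in catalogue if r.get("parent_uuid") == collection["uuid"]]

coll = make_collection("Reanalysis surface fields")
catalogue = [coll,
             make_file_record(coll, "/g/data/era/t2m_1990.nc"),
             make_file_record(coll, "/g/data/era/t2m_1991.nc")]
files = children_of(coll, catalogue)
```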

  3. Data, Metadata, and Ted

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    Ted Nelson coined the term “hypertext” and developed Xanadu in a universe parallel to the one in which librarians, archivists, and documentalists were creating metadata to establish cross-connections among the myriad topics of this world. When these universes collided, comets exploded as ontologies proliferated. Black holes were formed as data disappeared through lack of description. Today these universes coexist, each informing the other, if not always happily: the formal rules of metadata, ...

  4. There's Trouble in Paradise: Problems with Educational Metadata Encountered during the MALTED Project.

    Science.gov (United States)

    Monthienvichienchai, Rachada; Sasse, M. Angela; Wheeldon, Richard

    This paper investigates the usability of educational metadata schemas with respect to the case of the MALTED (Multimedia Authoring Language Teachers and Educational Developers) project at University College London (UCL). The project aims to facilitate authoring of multimedia materials for language learning by allowing teachers to share multimedia…

  5. Pembuatan Aplikasi Metadata Generator untuk Koleksi Peninggalan Warisan Budaya

    Directory of Open Access Journals (Sweden)

    Wimba Agra Wicesa

    2017-03-01

    Full Text Available Cultural heritage is an important asset used as a source of information in the study of history. Managing cultural heritage data is something that must be attended to in order to preserve the integrity of cultural heritage data for the future. Creating cultural heritage metadata is one step that can be taken to preserve the value of an artefact. Using the concept of metadata, information about each cultural heritage object becomes easy to read, manage, and retrieve even after long storage. In addition, with metadata, information about cultural heritage can be used by many systems. Cultural heritage metadata is fairly large, so building it takes considerable time; human error can also hamper the construction process. Generating cultural heritage metadata through the Metadata Generator application becomes faster and easier because it is done automatically by the system. The application also reduces human error, making the generation process more efficient.

  6. Parenting styles and learned resourcefulness of Turkish adolescents.

    Science.gov (United States)

    Türkel, Yeşim Deniz; Tezer, Esin

    2008-01-01

    This study investigated the differences among 834 high school students regarding learned resourcefulness in terms of perceived parenting style and gender. The data were gathered by administering the Parenting Style Inventory (PSI) and Rosenbaum's Self-Control Schedule (SCS). The results of ANOVA pertaining to the scores of learned resourcefulness yielded a significant main effect for parenting style groups. Neither the main effect for gender nor the gender and parenting style interaction effect was significant. The findings suggest that those who perceived their parents as authoritative had a relatively high level of learned resourcefulness as compared to those who perceived their parents as neglectful and authoritarian. Findings also indicated that those who perceived their parents as indulgent had a higher level of learned resourcefulness than those who perceived their parents as neglectful and authoritarian.

  7. Towards the Sigma Online Learning Model for crowdsourced recommendations of good web-based learning resources

    OpenAIRE

    Aaberg, Robin Garen

    2016-01-01

    Web-based learning resources are believed to play an active role in the learning environment of higher education today. This qualitative study explores how students at Bergen University College incorporate web-based learning resources in their learning activities. At the core of this research is the problem of retrieving good web resources after their first discovery. Useful and knowledge-granting web resources are discovered within a context of topics and objectives. It is here ar...

  8. Combined use of semantics and metadata to manage Research Data Life Cycle in Environmental Sciences

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida

    2017-04-01

    The use of metadata to contextualize datasets is quite extended in Earth System Sciences. There are some initiatives and available tools to help data managers choose the metadata standard that best fits their use case, like the DCC Metadata Directory (http://www.dcc.ac.uk/resources/metadata-standards). In our use case, we have been gathering physical, chemical and biological data from a water reservoir since 2010. A good metadata definition is crucial not only to contextualize our own data but also to integrate datasets from other sources like satellites or meteorological agencies. That is why we have chosen EML (Ecological Metadata Language), which integrates many different elements to define a dataset, including the project context, instrumentation and parameter definitions, the software used for processing and quality control, and the publication details. Those metadata elements can help both humans and machines to understand and process the dataset. However, the use of metadata is not enough to fully support the data life cycle, from the Data Management Plan definition to Publication and Re-use. To do so, we need to define not only metadata and attributes but also the relationships between them, so semantics are needed. Ontologies, being a knowledge representation, can contribute to defining the elements of a research data life cycle, including the DMP, datasets, software, etc. They can also define how the different elements are related to each other and how they interact. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the bases of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. 
To develop an ontology we are using a graphical tool

  9. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of this study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
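
    The ITEM/ITEM_GROUP pattern for representing mini-Archetypes can be sketched as follows; the classes and field names are simplified assumptions for illustration, not the CIMI Reference Model's actual definitions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """A single data element, e.g. one caDSR CDE (fields assumed)."""
    name: str           # data element name
    data_type: str      # value domain, e.g. "TEXT" or "NUMBER"
    cde_public_id: str  # link back to the caDSR registry entry

@dataclass
class ItemGroup:
    """A reusable grouping of Items: a mini-Archetype."""
    name: str
    items: List[Item] = field(default_factory=list)

    def element_names(self):
        return [item.name for item in self.items]

# A toy "clinical pharmaceutical" mini-Archetype (IDs invented).
pharma = ItemGroup("clinical_pharmaceutical", [
    Item("drug_name", "TEXT", "2004001"),
    Item("regimen_number", "NUMBER", "2004002"),
])
```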

  10. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich, machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate the data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO 19115, DIF, DataCite), while at the same time being usable by and understandable for scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not asked to provide information that can be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientific language, e.g. 'creators' of the DataCite schema are called 'authors'. To assist with filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. 
Currently, we are extending the metadata editor
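
    The "translate form fields into the scientific language" idea can be sketched as a simple mapping from user-facing names onto DataCite schema properties; the mapping table and function are illustrative, not the editor's actual code:

```python
# User-facing field names mapped onto DataCite schema properties
# (the real editor's mapping is richer; this table is an assumption).
FRIENDLY_TO_DATACITE = {
    "authors": "creators",
    "title": "titles",
    "year": "publicationYear",
    "abstract": "descriptions",
}

def to_datacite(form):
    """Rename user-facing form fields to their DataCite equivalents,
    rejecting anything the mapping does not cover."""
    unknown = sorted(set(form) - set(FRIENDLY_TO_DATACITE))
    if unknown:
        raise ValueError(f"unmapped form fields: {unknown}")
    return {FRIENDLY_TO_DATACITE[k]: v for k, v in form.items()}

record = to_datacite({"authors": ["Ulbricht, D.", "Elger, K."],
                      "title": "Reservoir monitoring 2010-2016",
                      "year": 2017})
```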

  11. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    Science.gov (United States)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema to address this problem, called the Geoscience Standard Names ontology, will be presented that utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
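
    The structure of such standardized names can be illustrated with a small parser. The double-underscore split between an object part and a quantity part is assumed from published Geoscience Standard Names examples; the function is not part of the ontology itself:

```python
def parse_standard_name(name):
    """Split a Geoscience Standard Names style variable name into its
    object and quantity parts. GSN names use a double underscore to
    separate the two (pattern assumed from published examples)."""
    object_part, quantity_part = name.split("__")
    return {"object": object_part.split("_"),  # object hierarchy
            "quantity": quantity_part}         # quantity with qualifiers

parsed = parse_standard_name("atmosphere_water__precipitation_leq-volume_flux")
```

    Machine-readable names of this kind are what allow two models' output and input variables to be matched automatically when composing a workflow.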

  12. Improving Metadata Compliance for Earth Science Data Records

    Science.gov (United States)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models allow various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules with the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance with both CF and ACDD, produces an objective summary weight, and can be applied to remote records via OPeNDAP calls. Originally a command-line tool, it has been extended to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. 
We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
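
    The scoring approach described might be sketched as follows; the attribute lists are abbreviated and the weighting scheme is an assumption, not the actual tool's:

```python
# Abbreviated attribute sets: CF requires "Conventions"; ACDD 1.3
# designates title/summary/keywords as highly recommended and a
# longer list as recommended (only a sample is shown here).
CF_REQUIRED = {"Conventions"}
ACDD_HIGHLY_RECOMMENDED = {"title", "summary", "keywords"}
ACDD_RECOMMENDED = {"license", "creator_name", "time_coverage_start",
                    "time_coverage_end", "geospatial_lat_min",
                    "geospatial_lat_max"}

def score_granule(attrs):
    """Weight the attribute groups, sum what is present, and report a
    0-100 score plus the missing attributes (weights are invented)."""
    found = set(attrs)
    weights = [(CF_REQUIRED, 3), (ACDD_HIGHLY_RECOMMENDED, 2),
               (ACDD_RECOMMENDED, 1)]
    earned = sum(w * len(group & found) for group, w in weights)
    possible = sum(w * len(group) for group, w in weights)
    every = CF_REQUIRED | ACDD_HIGHLY_RECOMMENDED | ACDD_RECOMMENDED
    return {"score": round(100 * earned / possible),
            "missing": sorted(every - found)}

report = score_granule({"Conventions": "CF-1.7, ACDD-1.3",
                        "title": "L4 analysed SST",
                        "summary": "Daily global SST analysis",
                        "license": "CC-0"})
```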

  13. On the Origin of Metadata

    Directory of Open Access Journals (Sweden)

    Sam Coppens

    2012-12-01

    Full Text Available Metadata has been around and has evolved for centuries, albeit not recognized as such. Medieval manuscripts typically had illuminations at the start of each chapter, being both a kind of signature for the author writing the script and a pictorial chapter anchor for the illiterates at the time. Nowadays, there is so much fragmented information on the Internet that users sometimes fail to distinguish the real facts from some bended truth, let alone being able to interconnect different facts. Here, metadata can both act as a noise reducer for detailed recommendations to the end users and be the catalyst that interconnects related information. Over time, metadata thus not only has had different modes of information; furthermore, metadata's relation of information to meaning, i.e., "semantics", evolved. Darwin's evolutionary propositions, from "species have an unlimited reproductive capacity", through "natural selection", to "the cooperation of mutations leads to adaptation to the environment", show remarkable parallels to both metadata's different modes of information and to its relation of information to meaning over time. In this paper, we will show that the evolution of the use of (meta)data can be mapped to Darwin's nine evolutionary propositions. As mankind and its behavior are products of an evolutionary process, the evolutionary process of metadata with its different modes of information is on the verge of a new, semantic era.

  14. Health professional learner attitudes and use of digital learning resources.

    Science.gov (United States)

    Maloney, Stephen; Chamberlain, Michael; Morrison, Shane; Kotsanas, George; Keating, Jennifer L; Ilic, Dragan

    2013-01-16

    Web-based digital repositories allow educational resources to be accessed efficiently and conveniently from diverse geographic locations, hold a variety of resource formats, enable interactive learning, and facilitate targeted access for the user. Unlike some other learning management systems (LMS), resources can be retrieved through search engines and meta-tagged labels, and content can be streamed, which is particularly useful for multimedia resources. The aim of this study was to examine usage and user experiences of an online learning repository (Physeek) in a population of physiotherapy students. The secondary aim of this project was to examine how students prefer to access resources and which resources they find most helpful. The following data were examined using an audit of the repository server: (1) number of online resources accessed per day in 2010, (2) number of each type of resource accessed, (3) number of resources accessed during business hours (9 am to 5 pm) and outside business hours (years 1-4), (4) session length of each log-on (years 1-4), and (5) video quality (bit rate) of each video accessed. An online questionnaire and 3 focus groups assessed student feedback and self-reported experiences of Physeek. Students preferred the support provided by Physeek to other sources of educational material primarily because of its efficiency. Peak usage commonly occurred at times of increased academic need (ie, examination times). Students perceived online repositories as a potential tool to support lifelong learning and health care delivery. The results of this study indicate that today's health professional students welcome the benefits of online learning resources because of their convenience and usability. This represents a transition away from traditional learning styles and toward technological learning support and may indicate a growing link between social immersions in Internet-based connections and learning styles. The true potential for Web

  15. Collection Metadata Solutions for Digital Library Applications

    Science.gov (United States)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  16. Learning about water resource sharing through game play

    Directory of Open Access Journals (Sweden)

    T. Ewen

    2016-10-01

    Full Text Available Games are an optimal way to teach about water resource sharing, as they allow real-world scenarios to be enacted. Both students and professionals learning about water resource management can benefit from playing games, through the process of understanding both the complexity of sharing of resources between different groups and decision outcomes. Here we address how games can be used to teach about water resource sharing, through both playing and developing water games. An evaluation of using the web-based game Irrigania in the classroom setting, supported by feedback from several educators who have used Irrigania to teach about the sustainable use of water resources, and decision making, at university and high school levels, finds Irrigania to be an effective and easy tool to incorporate into a curriculum. The development of two water games in a course for master's students in geography is also presented as a way to teach and communicate about water resource sharing. Through game development, students learned soft skills, including critical thinking, problem solving, teamwork, and time management, and overall the process was found to be an effective way to learn about water resource decision outcomes. This paper concludes with a discussion of learning outcomes from both playing and developing water games.

  17. Learning about water resource sharing through game play

    Science.gov (United States)

    Ewen, Tracy; Seibert, Jan

    2016-10-01

    Games are an optimal way to teach about water resource sharing, as they allow real-world scenarios to be enacted. Both students and professionals learning about water resource management can benefit from playing games, through the process of understanding both the complexity of sharing of resources between different groups and decision outcomes. Here we address how games can be used to teach about water resource sharing, through both playing and developing water games. An evaluation of using the web-based game Irrigania in the classroom setting, supported by feedback from several educators who have used Irrigania to teach about the sustainable use of water resources, and decision making, at university and high school levels, finds Irrigania to be an effective and easy tool to incorporate into a curriculum. The development of two water games in a course for masters students in geography is also presented as a way to teach and communicate about water resource sharing. Through game development, students learned soft skills, including critical thinking, problem solving, team work, and time management, and overall the process was found to be an effective way to learn about water resource decision outcomes. This paper concludes with a discussion of learning outcomes from both playing and developing water games.

  18. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

This paper explains the concept of metadata, reviews related research in China and abroad, and motivates the study of a metadata model for the high-level nuclear waste deep geological disposal project. Taking GML as a reference, the authors first set up DML within the framework of digital underground space engineering. Based on DML, a standardized metadata schema for the high-level nuclear waste deep geological disposal project is presented. Then, an internet-based metadata model is put forward. With standardized data and CSW services, this model may solve the problem of sharing and exchanging data of different forms. A metadata editor was built to search and maintain metadata based on this model. (authors)

  19. Motivational Factors in Self-Directed Informal Learning from Online Learning Resources

    Science.gov (United States)

    Song, Donggil; Bonk, Curtis J.

    2016-01-01

Learning is becoming more self-directed and informal with the support of emerging technologies. A variety of online resources have promoted informal learning by allowing people to learn on demand and just when needed. It is important to understand self-directed informal learners' motivational aspects, their learning goals, obstacles, and…

  20. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss the significant part of the framework, the metadata repository that stores information about the data warehouse, its logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.
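The versioning idea in this chapter, where each schema modification produces a new schema version recorded in a metadata repository, can be illustrated with a toy sketch. This is not the authors' framework; the class and field names are invented, and a real implementation would live in a relational DBMS.

```python
from datetime import date

class SchemaRepository:
    """Toy metadata repository tracking data warehouse schema versions."""

    def __init__(self):
        self.versions = []  # list of (version_no, valid_from, schema dict)

    def current(self):
        return self.versions[-1][2] if self.versions else None

    def apply_change(self, change, valid_from):
        # Each schema modification yields a new, fully materialized version,
        # so ETL processes built against an old version can keep using it.
        new_schema = dict(self.current() or {})
        new_schema.update(change)
        self.versions.append((len(self.versions) + 1, valid_from, new_schema))

repo = SchemaRepository()
repo.apply_change({"sales": ["date", "amount"]}, date(2020, 1, 1))
repo.apply_change({"sales": ["date", "amount", "currency"]}, date(2021, 1, 1))
```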

  1. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    Science.gov (United States)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.
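The catalogue-plus-indices pattern described here can be sketched minimally: derive a few summary indices from a daily series and attach them to a station's catalogue entry. The index set and field names below are illustrative placeholders; GSIM's actual metadata products and index definitions are far richer.

```python
import statistics

def streamflow_indices(daily_flows):
    """Compute a few illustrative indices from a daily streamflow series.

    None marks a missing day; missing-value counts are themselves useful
    metadata when assessing record quality.
    """
    valid = [q for q in daily_flows if q is not None]
    return {
        "n_days": len(daily_flows),
        "n_missing": len(daily_flows) - len(valid),
        "mean_flow": statistics.mean(valid),
        "max_flow": max(valid),
    }

# A hypothetical catalogue entry for one gauge (station id is made up).
catalogue_entry = {"station_id": "XX0001", "source": "national-db-7"}
catalogue_entry["indices"] = streamflow_indices([1.2, 3.4, None, 2.0])
```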

  2. Metadata Aided Run Selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2010-01-01

Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...

  3. Metadata aided run selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attrib...

  4. SAS- Semantic Annotation Service for Geoscience resources on the web

    Science.gov (United States)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision for allowing pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single environment (the web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web, and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
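The core annotation operation, attaching standard-name predicates to resources so they can be queried from one place, can be caricatured with a plain triple store. This sketch deliberately avoids the semantic-web stack (RDF, OWL) that SAS actually builds on; all resource names and predicates below are invented.

```python
class AnnotationService:
    """Minimal stand-in for a semantic annotation service: resources are
    annotated with (subject, predicate, object) triples and queried back."""

    def __init__(self):
        self.triples = set()

    def annotate(self, resource, predicate, value):
        self.triples.add((resource, predicate, value))

    def query(self, predicate=None, value=None):
        # Return the subjects matching the given predicate/value filters.
        return {s for (s, p, o) in self.triples
                if (predicate is None or p == predicate)
                and (value is None or o == value)}

sas = AnnotationService()
sas.annotate("model:topmodel", "hasStandardName", "runoff")
sas.annotate("data:gauge42", "hasStandardName", "runoff")
matches = sas.query(predicate="hasStandardName", value="runoff")
```

Because both a model and a dataset carry the same standard name, a single query links them, which is the integration benefit the abstract argues for.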

  5. The Global Streamflow Indices and Metadata Archive (GSIM) – Part 1: The production of a daily streamflow archive and metadata

    Directory of Open Access Journals (Sweden)

    H. X. Do

    2018-04-01

Full Text Available This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  6. Prediction of Solar Eruptions Using Filament Metadata

    Science.gov (United States)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.
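The headline finding, that a decreasing filament length over time discriminates erupting from non-erupting tracks, can be turned into a single-feature toy classifier: fit a least-squares slope to the length series and flag tracks with a negative trend. This is only a caricature of the paper's Random Forest approach, and the threshold is an assumption, not a value from the study.

```python
def length_trend(lengths):
    """Least-squares slope of filament length over equally spaced observations."""
    n = len(lengths)
    mean_x = (n - 1) / 2
    mean_y = sum(lengths) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(lengths))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def likely_eruptive(track_lengths, slope_threshold=0.0):
    # Erupting tracks were reported as more likely to have decreasing length,
    # so a negative fitted slope is taken here as the (single) warning sign.
    return length_trend(track_lengths) < slope_threshold

shrinking = likely_eruptive([100, 90, 75, 60])   # steadily decreasing length
growing = likely_eruptive([50, 55, 60])          # increasing length
```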

  7. A web-based, dynamic metadata interface to MDSplus

    International Nuclear Information System (INIS)

    Gardner, Henry J.; Karia, Raju; Manduchi, Gabriele

    2008-01-01

    We introduce the concept of a Fusion Data Grid and discuss the management of metadata within such a Grid. We describe a prototype application which serves fusion data over the internet together with metadata information which can be flexibly created and modified over time. The application interfaces with the MDSplus data acquisition system and it has been designed to capture metadata which is generated by scientists from the post-processing of experimental data. The implementation of dynamic metadata tables using the Java programming language together with an object-relational mapping system, Hibernate, is described in the Appendix

  8. Technologies for metadata management in scientific a

    OpenAIRE

    Castro-Romero, Alexander; González-Sanabria, Juan S.; Ballesteros-Ricaurte, Javier A.

    2015-01-01

The use of Semantic Web technologies has been increasing, so it is common to use them in different ways. This article evaluates how these technologies can contribute to improving the indexing of articles in scientific journals. It begins with a conceptual review of metadata, then examines the most important technologies for using metadata on the Web, choosing one of them to apply to the case study of scientific article indexing, in order to determine the metadata ...

  9. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

Full Text Available The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.
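The submit-validate-store cycle at the heart of such a DCC can be sketched with a minimal object store that checks required metadata fields before accepting an object. SnoVault's real validation uses JSON Schema and a full REST API; the field names and identifiers below are illustrative only.

```python
import json

class ObjectStore:
    """Minimal sketch of schema-validated metadata submission."""

    def __init__(self, required_fields):
        self.required = required_fields
        self.objects = {}

    def submit(self, uuid, metadata):
        # Reject submissions that are missing required metadata fields.
        missing = [f for f in self.required if f not in metadata]
        if missing:
            raise ValueError(f"missing required fields: {missing}")
        # Store a JSON round-tripped copy so callers cannot mutate it later.
        self.objects[uuid] = json.loads(json.dumps(metadata))
        return uuid

store = ObjectStore(required_fields=["assay", "biosample"])
store.submit("e1", {"assay": "ChIP-seq", "biosample": "K562"})
```

A submission lacking, say, `biosample` would raise `ValueError` instead of silently entering the database, which is the curation guarantee the abstract emphasizes.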

  10. The role of metadata in managing large environmental science datasets. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Melton, R.B.; DeVaney, D.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States); French, J. C. [Univ. of Virginia, (United States)

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  11. The fluidities of digital learning environments and resources

    DEFF Research Database (Denmark)

    Hansbøl, Mikala

    2012-01-01

The research project “Educational cultures and serious games on a global market place” (2009-2011) dealt with the challenge that a digital learning environment, and hence its educational development space, always exists outside the present space and scope of activities. With a reference… and establishments of the virtual universe called Mingoville.com, the research shows a need to include in researchers' conceptualizations of digital learning environments and resources their shifting materialities and platformations, and hence emerging (often unpredictable) agencies and educational development… spaces. Keywords: fluidity, digital learning environment, digital learning resource, educational development space…

  12. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such kind of metadata which are encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.

  13. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

… ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms… to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining…
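Since the abstract names DTW (dynamic time warping) as one of the stream-comparison techniques, here is the textbook DTW distance in plain Python. How Metafier actually applies it, and its other two algorithms, is not reproduced here; the idea is simply that the unvalidated point whose stream is closest to a validated point's stream is a candidate to inherit its metadata label.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two numeric streams."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest warping path: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two streams with the same shape but shifted timing still score zero,
# which is why DTW tolerates "only slightly similar patterns".
score = dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3])
```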

  14. VT County National Resources Inventory Data 1982-1997

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) This collection provides tabular USDA - Natural Resource Conservation Service (NRCS), National Resources Inventory (NRI) data (1982-1997), by...

  15. openPDS: protecting the privacy of metadata through SafeAnswers.

    Directory of Open Access Journals (Sweden)

    Yves-Alexandre de Montjoye

Full Text Available The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
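The SafeAnswers idea, that the question travels to the data and only a low-dimensional answer leaves the store, can be sketched in a few lines. This is a conceptual sketch only; the class name and the example question are invented, and a real openPDS deployment adds authentication, sandboxing, and aggregation.

```python
class PersonalDataStore:
    """Sketch of the SafeAnswers pattern: raw metadata stays inside the
    store, and services receive only computed, low-dimensional answers."""

    def __init__(self, location_log):
        self._log = location_log  # raw metadata never leaves this object

    def safe_answer(self, question):
        # `question` is a function reducing the metadata to an aggregate;
        # only its return value is shared with the requesting service.
        return question(self._log)

pds = PersonalDataStore(["home", "work", "gym", "work", "home"])
# The service learns one number, not the location log itself.
n_distinct_places = pds.safe_answer(lambda log: len(set(log)))
```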

  16. CINERGI: Community Inventory of EarthCube Resources for Geoscience Interoperability

    Science.gov (United States)

    Zaslavsky, Ilya; Bermudez, Luis; Grethe, Jeffrey; Gupta, Amarnath; Hsu, Leslie; Lehnert, Kerstin; Malik, Tanu; Richard, Stephen; Valentine, David; Whitenack, Thomas

    2014-05-01

    Organizing geoscience data resources to support cross-disciplinary data discovery, interpretation, analysis and integration is challenging because of different information models, semantic frameworks, metadata profiles, catalogs, and services used in different geoscience domains, not to mention different research paradigms and methodologies. The central goal of CINERGI, a new project supported by the US National Science Foundation through its EarthCube Building Blocks program, is to create a methodology and assemble a large inventory of high-quality information resources capable of supporting data discovery needs of researchers in a wide range of geoscience domains. The key characteristics of the inventory are: 1) collaboration with and integration of metadata resources from a number of large data facilities; 2) reliance on international metadata and catalog service standards; 3) assessment of resource "interoperability-readiness"; 4) ability to cross-link and navigate data resources, projects, models, researcher directories, publications, usage information, etc.; 5) efficient inclusion of "long-tail" data, which are not appearing in existing domain repositories; 6) data registration at feature level where appropriate, in addition to common dataset-level registration, and 7) integration with parallel EarthCube efforts, in particular focused on EarthCube governance, information brokering, service-oriented architecture design and management of semantic information. We discuss challenges associated with accomplishing CINERGI goals, including defining the inventory scope; managing different granularity levels of resource registration; interaction with search systems of domain repositories; explicating domain semantics; metadata brokering, harvesting and pruning; managing provenance of the harvested metadata; and cross-linking resources based on the linked open data (LOD) approaches. 
At the higher level of the inventory, we register domain-wide resources such as domain

  17. Handling Metadata in a Neurophysiology Laboratory

    Directory of Open Access Journals (Sweden)

    Lyuba Zehl

    2016-07-01

Full Text Available To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to a specific project at hand. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.
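The odML format mentioned at the end organizes metadata as a tree of sections holding key-value properties. The following is a tiny stand-in for that structure, not the actual `odml` Python library, and the section and property names are invented for illustration.

```python
class Section:
    """Minimal odML-style section: nested sections hold key-value properties.

    The real odML format additionally supports units, types, definitions,
    and terminologies; none of that is modeled here.
    """

    def __init__(self, name):
        self.name = name
        self.properties = {}
        self.subsections = {}

    def add_section(self, name):
        self.subsections[name] = Section(name)
        return self.subsections[name]

# A sketch of experiment metadata for a multi-channel recording session.
root = Section("experiment")
subject = root.add_section("subject")
subject.properties["species"] = "Macaca mulatta"
recording = root.add_section("recording")
recording.properties["n_channels"] = 96
```

Keeping such a tree alongside the recorded data is what makes the five use cases in the paper (sharing, reanalysis, and so on) tractable.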

  18. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  19. NAIP National Metadata

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — The NAIP National Metadata Map contains USGS Quarter Quad and NAIP Seamline boundaries for every year NAIP imagery has been collected. Clicking on the map also makes...

  20. The gap between medical faculty's perceptions and use of e-learning resources.

    Science.gov (United States)

    Kim, Kyong-Jee; Kang, Youngjoon; Kim, Giwoon

    2017-01-01

e-Learning resources have become increasingly popular in medical education; however, there has been scant research on faculty perceptions and use of these resources. To investigate medical faculty's use of e-learning resources and to draw on practical implications for fostering their use of such resources. Approximately 500 full-time faculty members in 35 medical schools across the nation in South Korea were invited to participate in a 30-item questionnaire on their perceptions and use of e-learning resources in medical education. The questionnaires were distributed in both online and paper formats. Descriptive analysis and reliability analysis were conducted on the data. Eighty faculty members from 28 medical schools returned the questionnaires. Twenty-two percent of respondents were female and 78% were male, and their rank, disciplines, and years of teaching experience all varied. Participants had positive perceptions of e-learning resources in terms of usefulness for student learning and usability; still, only 39% of them incorporated those resources in their teaching. The most frequently selected reasons for not using e-learning resources in their teaching were 'lack of resources relevant to my lectures,' 'lack of time to use them during lectures,' and 'was not aware of their availability.' Our study indicates a gap between medical faculty's positive perceptions of e-learning resources and their low use of such resources. Our findings highlight the need for further study of individual and institutional barriers to faculty adoption of e-learning resources to bridge this gap.

  1. An emergent theory of digital library metadata enrich then filter

    CERN Document Server

    Stevens, Brett

    2015-01-01

    An Emergent Theory of Digital Library Metadata is a reaction to the current digital library landscape that is being challenged with growing online collections and changing user expectations. The theory provides the conceptual underpinnings for a new approach which moves away from expert defined standardised metadata to a user driven approach with users as metadata co-creators. Moving away from definitive, authoritative, metadata to a system that reflects the diversity of users’ terminologies, it changes the current focus on metadata simplicity and efficiency to one of metadata enriching, which is a continuous and evolving process of data linking. From predefined description to information conceptualised, contextualised and filtered at the point of delivery. By presenting this shift, this book provides a coherent structure in which future technological developments can be considered.

  2. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
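The QFS data model, files, user-defined attributes, and file relationships all as first-class objects in a graph, can be sketched as below. This is a conceptual sketch only: Quasar's XPATH-extended query syntax is not reproduced, and the file names and relationship types are invented.

```python
class FileGraph:
    """Sketch of a metadata-rich file system's graph model: files carry
    user-defined attributes, and typed relationships link files directly."""

    def __init__(self):
        self.attrs = {}   # file name -> {attribute: value}
        self.links = []   # (source file, relation, target file)

    def add_file(self, name, **attrs):
        self.attrs[name] = attrs

    def relate(self, src, relation, dst):
        self.links.append((src, relation, dst))

    def related(self, src, relation):
        # A query walks relationship edges instead of directory hierarchy.
        return [t for (s, r, t) in self.links if s == src and r == relation]

fs = FileGraph()
fs.add_file("run42.dat", experiment="plasma-shot")
fs.add_file("run42.cal", kind="calibration")
fs.relate("run42.dat", "calibratedBy", "run42.cal")
hits = fs.related("run42.dat", "calibratedBy")
```

Storing the relationship in the file system itself, rather than in a side relational database, is exactly the consistency problem the abstract says QFS avoids.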

  3. Discovery and Use of Online Learning Resources: Case Study Findings

    OpenAIRE

    Laurie Miller Nelson; James Dorward; Mimi M. Recker

    2004-01-01

    Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called learning objects. Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article describes a case study of mathematics and science teachers' practices and desires surrounding the discovery, selection, and use of digital library resources for...

  4. Discovery and Use of Online Learning Resources: Case Study Findings

    Science.gov (United States)

    Recker, Mimi M.; Dorward, James; Nelson, Laurie Miller

    2004-01-01

    Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called "learning objects." Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article…

  5. SPASE, Metadata, and the Heliophysics Virtual Observatories

    Science.gov (United States)

    Thieman, James; King, Todd; Roberts, Aaron

    2010-01-01

    To provide data search and access capability in the field of Heliophysics (the study of the Sun and its effects on the Solar System, especially the Earth), a number of Virtual Observatories (VOs) have been established, both via direct funding from the U.S. National Aeronautics and Space Administration (NASA) and through other funding agencies in the U.S. and worldwide. At least 15 systems can be labeled as Virtual Observatories in the Heliophysics community, 9 of them funded by NASA. The problem is that these VOs use different metadata and data search approaches, so a search for data relevant to a particular research question can involve consulting multiple VOs and learning a different approach for finding and acquiring data from each. The Space Physics Archive Search and Extract (SPASE) project is intended to provide a common data model for Heliophysics data and therefore a common set of metadata for searches of the VOs. The SPASE Data Model has been developed through the common efforts of the Heliophysics Data and Model Consortium (HDMC) representatives over a number of years. We currently have released Version 2.1 of the Data Model. The advantages and disadvantages of the Data Model will be discussed along with the plans for the future. Recent changes requested by new members of the SPASE community indicate some of the directions for further development.

  6. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    Science.gov (United States)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.

  7. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications and standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators, as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born-digital images, as well as physical objects that are digitized, can be extended and the files preserved more readily with embedded metadata.

  8. Interpreting the ASTM 'content standard for digital geospatial metadata'

    Science.gov (United States)

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  9. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications and standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators, as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born-digital images, as well as physical objects that are digitized, can be extended and the files preserved more readily with embedded metadata.

  10. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture for a metadata management system based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for metadata sets available locally, in order to reduce the access time of remote queries. An application can find part of its results in the local cache, while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, MICC can not only effectively enhance the fault tolerance and load balancing of the system, but also improve access efficiency while ensuring access quality.
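
    The split-and-merge idea can be illustrated with a minimal sketch (invented function names, not the paper's code): a metadata query is partitioned into the part answerable from the local cache and the sub-query that must be forwarded to remote metadata servers:

```python
# Illustrative sketch of cache-aware query splitting: serve what the
# local cache already holds, and emit the remainder as a sub-query
# for remote metadata servers.

def split_query(keys, cache):
    local = {k: cache[k] for k in keys if k in cache}
    remote_keys = [k for k in keys if k not in cache]
    return local, remote_keys

cache = {"size": 1024, "owner": "alice"}
local, remote = split_query(["size", "owner", "mtime"], cache)
print(local)    # partial result served from the local cache
print(remote)   # sub-query forwarded to remote metadata servers
```

    Merging is then just combining `local` with the remote response, which is where the fault-tolerance and load-balancing benefits mentioned in the abstract come from: remote servers only see the keys the cache could not answer.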

  11. Leveraging Metadata to Create Better Web Services

    Science.gov (United States)

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  12. eLearning resources to supplement postgraduate neurosurgery training.

    OpenAIRE

    Stienen, MN; Schaller, K; Cock, H; Lisnic, V; Regli, L; Thomson, S

    2017-01-01

    BACKGROUND: In an increasingly complex and competitive professional environment, improving methods to educate neurosurgical residents is key to ensuring high-quality patient care. Electronic (e)Learning resources promise interactive knowledge acquisition. We set out to give a comprehensive overview of available eLearning resources that aim to improve postgraduate neurosurgical training and to review the available literature. MATERIAL AND METHODS: A MEDLINE query was performed, using the search ter...

  13. MEAT: An Authoring Tool for Generating Adaptable Learning Resources

    Science.gov (United States)

    Kuo, Yen-Hung; Huang, Yueh-Min

    2009-01-01

    Mobile learning (m-learning) is a new trend in the e-learning field. The learning services in m-learning environments are supported by fundamental functions, especially the content and assessment services, which need an authoring tool to rapidly generate adaptable learning resources. To fulfill the imperious demand, this study proposes an…

  14. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities’ libraries

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Introduction: Recent progress in providing innovative solutions for the organization of electronic resources, and research in this area, shows a global trend toward the use of new strategies such as metadata to facilitate the description, organization, and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study was a comparative study of the central libraries' websites of Iranian state universities with respect to Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. Materials and Methods: This is an applied-descriptive study, and the data collection tool is a set of checklists created by the researchers. The statistical population includes 98 websites of the Iranian state universities under the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, sampled by census. Information was collected through observation and direct visits to the websites, and data analysis was performed with Microsoft Excel 2011. Results: The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used elements that overlap between HTML meta tags and DC elements. The percentage of overlapping DC elements in the Ministry of Health websites was 56% for both description and keywords, and in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags have a moderate presence in both ministries: the most-used elements were keywords and description (56%) and the least-used were date and format (0%). Conclusion: It is expected that the Ministry of Health and the Ministry of Science will follow the same path in using the Dublin Core standard on their websites in the future. Because central library websites are an example of scientific web pages, special attention in designing them can help the researchers

  15. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities' libraries.

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Recent progress in providing innovative solutions for the organization of electronic resources, and research in this area, shows a global trend toward the use of new strategies such as metadata to facilitate the description, organization, and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study was a comparative study of the central libraries' websites of Iranian state universities with respect to Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. This is an applied-descriptive study, and the data collection tool is a set of checklists created by the researchers. The statistical population includes 98 websites of the Iranian state universities under the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, sampled by census. Information was collected through observation and direct visits to the websites, and data analysis was performed with Microsoft Excel 2011. The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used elements that overlap between HTML meta tags and DC elements. The percentage of overlapping DC elements in the Ministry of Health websites was 56% for both description and keywords, and in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags have a moderate presence in both ministries: the most-used elements were keywords and description (56%) and the least-used were date and format (0%). It is expected that the Ministry of Health and the Ministry of Science will follow the same path in using the Dublin Core standard on their websites in the future. Because central library websites are an example of scientific web pages, special attention in designing them can help the researchers to achieve faster and more accurate information resources
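
    The overlap the study measures can be sketched as follows (a simplified illustration; the normalisation and sample page are assumptions, not the authors' method): extract `<meta>` tag names from a page and intersect them with the Dublin Core element set:

```python
# Hedged sketch: collect <meta name="..."> values from an HTML page,
# normalise "DC."-prefixed names, and report which match the fifteen
# Dublin Core elements. Requires Python 3.9+ for str.removeprefix.
from html.parser import HTMLParser

DC_ELEMENTS = {"title", "creator", "subject", "description", "publisher",
               "contributor", "date", "type", "format", "identifier",
               "source", "language", "relation", "coverage", "rights"}

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.names = set()
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                # "DC.Title" and plain "description" both normalise here
                self.names.add(d["name"].lower().removeprefix("dc."))

page = ('<head><meta name="keywords" content="library">'
        '<meta name="description" content="...">'
        '<meta name="DC.Title" content="Home"></head>')
p = MetaCollector()
p.feed(page)
print(sorted(p.names & DC_ELEMENTS))
```

    On this toy page, `keywords` is an HTML meta tag with no DC counterpart, while `description` and `DC.Title` count toward the overlap, which mirrors the distinction the study draws between HTML meta tags and DC elements.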

  16. Measuring learning gain: Comparing anatomy drawing screencasts and paper-based resources.

    Science.gov (United States)

    Pickering, James D

    2017-07-01

    The use of technology-enhanced learning (TEL) resources is now a common tool across a variety of healthcare programs. Despite the popularity of this approach to curriculum delivery, there remains a paucity of empirical evidence that quantifies the change in learning gain. The aim of the study was to measure the changes in learning gain observed with anatomy drawing screencasts in comparison to a traditional paper-based resource. Learning gain is a widely used term to describe the tangible changes in learning outcomes that have been achieved after a specific intervention. In this study, a cohort of Year 2 medical students voluntarily participated and were randomly assigned to either a screencast or a textbook group to compare changes in learning gain across resource type. Using a pre-test/post-test protocol, and a range of statistical analyses, the learning gain was calculated at three test points: immediate post-test, 1-week post-test and 4-week post-test. Results at all test points revealed a significant increase in learning gain and large effect sizes for the screencast group compared to the textbook group. Possible reasons behind the difference in learning gain are explored by comparing the instructional design of both resources. Strengths and weaknesses of the study design are also considered. This work adds to the growing area of research that supports the effective design of TEL resources complementary to the cognitive theory of multimedia learning, in order to achieve both an effective and efficient learning resource for anatomical education. Anat Sci Educ 10: 307-316. © 2016 American Association of Anatomists.
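
    Learning gain from a pre-test/post-test protocol is commonly quantified with Hake's normalised gain, g = (post - pre) / (max - pre); the study's exact statistics may differ, so the sketch below is purely illustrative:

```python
# Normalised learning gain: the fraction of the available improvement
# that a learner actually achieved between pre-test and post-test.

def normalised_gain(pre, post, max_score=100.0):
    """g = (post - pre) / (max - pre), with a ceiling guard."""
    if pre >= max_score:
        return 0.0
    return (post - pre) / (max_score - pre)

# A student scoring 40 before and 70 after captured half of the
# 60 points that were still available.
print(round(normalised_gain(40, 70), 2))   # 0.5
```

    Comparing mean normalised gains (and their effect sizes) between the screencast and textbook groups is the kind of analysis the pre/post design above supports.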

  17. A Generic Metadata Editor Supporting System Using Drupal CMS

    Science.gov (United States)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, many different standards exist in the Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Markup Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM or, for geopolitical regions, the European Profile or the North American Profile of the ISO standards. It is therefore desirable to have a software foundation that supports metadata creation and editing for multiple standards and profiles without reinventing the wheel. We have developed a software module as a generic, flexible system to do just that: to facilitate support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form for creating or editing a metadata record. Behind the scenes, the form-definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file
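
    The two-step flow described above can be sketched in miniature (a hypothetical form schema, with Python standing in for the module's PHP): an XML form definition is parsed into a nested structure, the analogue of the PHP array, and then "rendered" as a list of form fields:

```python
# Illustrative sketch: parse a stored XML form definition into a
# dict (the analogue of the Drupal Form API array) and render it.
import xml.etree.ElementTree as ET

FORM_XML = """
<form name="fgdc-minimal">
  <field name="title" type="textfield" required="true"/>
  <field name="abstract" type="textarea" required="false"/>
</form>
"""

def parse_form(xml_text):
    root = ET.fromstring(xml_text)
    return {"name": root.get("name"),
            "fields": [dict(f.attrib) for f in root.findall("field")]}

definition = parse_form(FORM_XML)
for f in definition["fields"]:
    print(f"{f['name']}: {f['type']} (required={f['required']})")
```

    In the real module the parsed structure is handed to the Drupal Form API and the submitted values are written back into a metadata record; the schema element names here are invented for illustration.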

  18. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising: receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value; determining which of the one or more burst buffers stores the requested metadata; and, upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
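
    One common way to realise the "determining which burst buffer stores the metadata" step is to hash the key over the set of buffers; the sketch below is an illustrative stand-in, not the patented method:

```python
# Illustrative sketch: route a metadata key to one of several burst
# buffers by hashing, then look the key-value pair up in that
# buffer's local portion of the distributed key-value store.
import hashlib

def owner(key, n_buffers):
    """Deterministically map a key to a buffer index."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_buffers

buffers = [dict() for _ in range(4)]          # one KV store per buffer
key = "block:0042:meta"
buffers[owner(key, 4)][key] = {"len": 4096, "origin": "sim_a"}

# A reader repeats the same hash to find which buffer holds the metadata.
print(buffers[owner(key, 4)][key]["len"])
```

    Because the mapping is deterministic, any node can locate the metadata without a central directory, which is the property the distributed key-value design relies on.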

  19. Metadata Design in the New PDS4 Standards - Something for Everybody

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  20. Big Data X-Learning Resources Integration and Processing in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kong Xiangsheng

    2014-09-01

    Full Text Available The cloud computing platform offers good flexibility, and more and more learning systems are being migrated to it. Firstly, this paper describes different types of educational environments and the data they provide. Then, it proposes a heterogeneous learning-resource mining, integration, and processing architecture. In order to integrate and process the different types of learning resources in different educational environments, this paper proposes a novel solution, with a massive storage integration algorithm and a conversion algorithm, for heterogeneous learning-resource storage and management in cloud environments.

  1. Discovery and Use of Online Learning Resources: Case Study Findings

    Directory of Open Access Journals (Sweden)

    Laurie Miller Nelson

    2004-04-01

    Full Text Available Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called ‘learning objects.’ Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article describes a case study of mathematics and science teachers’ practices and desires surrounding the discovery, selection, and use of digital library resources for instructional purposes. Findings suggest that the teacher participants used a broad range of search strategies in order to find resources that they deemed were age-appropriate, current, and accurate. They intended to include these resources with little modification into planned instructional activities. The article concludes with a discussion of the implications of the findings for improving the design of educational digital library systems, including tools supporting resource reuse.

  2. Taxonomic names, metadata, and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Roderic D. M. Page

    2006-01-01

    Full Text Available Life Science Identifiers (LSIDs) offer an attractive solution to the problem of globally unique identifiers for digital objects in biology. However, I suggest that in the context of taxonomic names, the most compelling benefit of adopting these identifiers comes from the metadata associated with each LSID. By using existing vocabularies wherever possible, and a simple vocabulary for taxonomy-specific concepts, we can quickly capture the essential information about a taxonomic name in the Resource Description Framework (RDF) format. This opens up the prospect of using technologies developed for the Semantic Web to add "taxonomic intelligence" to biodiversity databases. This essay explores some of these ideas in the context of providing a taxonomic framework for the phylogenetic database TreeBASE.
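
    The core idea, describing a taxonomic name with existing vocabularies plus a small taxonomy-specific one, can be sketched as RDF-style triples (the `tn:` vocabulary and LSID below are invented placeholders, not the paper's actual terms):

```python
# Minimal sketch: a taxonomic name as subject-predicate-object
# triples, reusing Dublin Core ("dc:") terms where they fit and a
# hypothetical taxonomy vocabulary ("tn:") for the rest.

lsid = "urn:lsid:example.org:names:12345"   # placeholder identifier
triples = [
    (lsid, "dc:title", "Homo sapiens"),
    (lsid, "dc:creator", "Linnaeus"),
    (lsid, "dc:date", "1758"),
    (lsid, "tn:rank", "species"),
]

def values(subject, predicate, graph):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(values(lsid, "tn:rank", triples))
```

    Resolving an LSID would return metadata of this shape as RDF, which is what lets generic Semantic Web tooling reason over taxonomic names.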

  3. Resource Guide for Persons with Learning Impairments.

    Science.gov (United States)

    IBM, Atlanta, GA. National Support Center for Persons with Disabilities.

    The resource guide identifies products which assist learning disabled and mentally retarded individuals in accessing IBM (International Business Machine) Personal Computers or the IBM Personal System/2 family of products. An introduction provides a general overview of ways computers can help learning disabled or retarded persons. The document then…

  4. The Energy Industry Profile of ISO/DIS 19115-1: Facilitating Discovery and Evaluation of, and Access to Distributed Information Resources

    Science.gov (United States)

    Hills, S. J.; Richard, S. M.; Doniger, A.; Danko, D. M.; Derenthal, L.; Energistics Metadata Work Group

    2011-12-01

    A diverse group of organizations representative of the international community involved in disciplines relevant to the upstream petroleum industry (energy companies, suppliers and publishers of information to the energy industry, vendors of software applications used by the industry, and partner government and academic organizations) has engaged in the Energy Industry Metadata Standards Initiative. This Initiative envisions the use of standard metadata within the community to enable significant improvements in the efficiency with which users discover, evaluate, and access distributed information resources. The metadata standard needed to realize this vision is the initiative's primary deliverable. In addition to developing the metadata standard, the initiative is promoting its adoption to accelerate realization of the vision, and publishing metadata exemplars conformant with the standard. Implementation of the standard by community members, in the form of published metadata which document the information resources each organization manages, will allow use of tools requiring consistent metadata for efficient discovery and evaluation of, and access to, information resources. While metadata are expected to be widely accessible, access to associated information resources may be more constrained. The initiative is being conducted by Energistics' Metadata Work Group, in collaboration with the USGIN Project. Energistics is a global standards group in the oil and natural gas industry. The Work Group determined early in the initiative, based on input solicited from 40+ organizations and on an assessment of existing metadata standards, to develop the target metadata standard as a profile of a revised version of ISO 19115, formally the "Energy Industry Profile of ISO/DIS 19115-1 v1.0" (EIP). The Work Group is participating in the ISO/TC 211 project team responsible for the revision of ISO 19115, now ready for "Draft International Standard" (DIS) status.
With ISO 19115 an

  5. Learning foreign languages in teletandem: Resources and strategies

    Directory of Open Access Journals (Sweden)

    João A. TELLES

    2015-12-01

    Full Text Available Teletandem is a virtual, collaborative, and autonomous context in which two speakers of different languages use the text, voice, and webcam image resources of VOIP technology (Skype) to help each other learn their native language (or language of proficiency). This paper focuses on learners' studying processes and their responses to teletandem. We collected quantitative and qualitative data from 134 university students through an online questionnaire. Results show the content of students' learning processes, resources, activities, and strategies. We conclude with a critical discussion of the results and raise pedagogical implications for the use of teletandem as a mode of online intercultural contact to learn foreign languages.

  6. VT Mineral Resources - MRDS Extract

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) MRDSVT is an extract from the Mineral Resources Data System (MRDS) covering the State of Vermont only. MRDS database contains the records provided...

  7. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose that InformaCam should be interpreted as a ‘forensic device’. Drawing on the conceptualization of forensics and on work on socio-technical devices, the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics’. By treating data in this particular way, surveillance resistance does more than seek awareness; it becomes engaged with investigatory practices. Considering the extent to which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.
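
    The two moves the paper describes, stripping metadata before sharing versus retaining it as structured proof, can be caricatured in a few lines (field names invented; the hash below merely stands in for InformaCam's actual signing and encryption):

```python
# Toy illustration of the two options: strip surveillance-sensitive
# metadata before publication, or seal the full record so it can
# later serve as evidence of integrity.
import hashlib
import json

capture = {"pixels": "...", "gps": (52.37, 4.90), "device_id": "A1",
           "timestamp": "2015-10-01T12:00:00Z"}
SENSITIVE = ("gps", "device_id")

def strip(meta):
    """Remove fields that create a tracking risk."""
    return {k: v for k, v in meta.items() if k not in SENSITIVE}

def seal(meta):
    """Canonicalise and hash the full record as an integrity proof."""
    blob = json.dumps(meta, sort_keys=True, default=str)
    return hashlib.sha256(blob.encode()).hexdigest()

print(sorted(strip(capture)))    # what a shared copy would retain
print(len(seal(capture)))        # 64 hex characters of proof
```

    The point of the paper is precisely that the same metadata can be a liability in one setting and evidence in another; the two functions above make that duality concrete.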

  8. Learning Networks: connecting people, organizations, autonomous agents and learning resources to establish the emergence of effective lifelong learning

    NARCIS (Netherlands)

    Koper, Rob; Sloep, Peter

    2003-01-01

    Koper, E.J.R., Sloep, P.B. (2002) Learning Networks connecting people, organizations, autonomous agents and learning resources to establish the emergence of effective lifelong learning. RTD Programma into Learning Technologies 2003-2008. More is different… Heerlen, Nederland: Open Universiteit

  9. Metadata Creation, Management and Search System for your Scientific Data

    Science.gov (United States)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format, including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury search won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010);
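
    The harvest-and-index pattern behind Mercury can be sketched as follows (the formats and field names are simplified stand-ins, not Mercury's schema): heterogeneous records are normalised into one centralized inverted index that answers fielded searches:

```python
# Illustrative sketch: records harvested in different metadata
# formats are reduced to common fields and merged into a single
# inverted index, so one query spans all sources.

harvested = [
    {"format": "fgdc", "title": "Soil moisture, site A", "keywords": ["soil"]},
    {"format": "eml",  "title": "Leaf litter survey",    "keywords": ["carbon"]},
]

index = {}
for i, rec in enumerate(harvested):
    terms = rec["title"].lower().replace(",", "").split() + rec["keywords"]
    for word in terms:
        index.setdefault(word, set()).add(i)

def search(term):
    """Return indices of records matching a single term."""
    return sorted(index.get(term.lower(), set()))

print(search("soil"))   # matches record 0 on both title and keyword
```

    Keeping the index central while the data stay on the providers' servers is what gives Mercury its fast responses without taking ownership of the data.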

  10. Survey data and metadata modelling using document-oriented NoSQL

    Science.gov (United States)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes, yet the data need to be stored in an integrated way so that statistical results can be obtained faster and more easily. A data warehouse (DW) can be used for this purpose; however, the change of variables in every period cannot be accommodated by a traditional DW through Slowly Changing Dimensions (SCD). Previous research handled variable changes in a DW by managing metadata with a multiversion DW (MVDW), designed using the relational model. Other research has found that a non-relational model in a NoSQL database offers faster reading times than the relational model. We therefore propose managing metadata changes using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data whose metadata have changed. Evaluation of the proposed model and algorithms shows that a database with the proposed design can correctly retrieve data across metadata changes. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
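    The document-oriented idea can be sketched as follows: each survey period is stored as one document that embeds the metadata (variable list and renames) valid for that period, so a query can resolve a variable even after it is renamed. This is an assumed simplification, not the paper's model; the variable names and rename mapping are invented.

```python
# One document per survey period; metadata travels with the data.
surveys = [
    {"year": 2015, "metadata": {"variables": ["income", "hh_size"]},
     "rows": [{"income": 120, "hh_size": 4}]},
    # In 2016 the variable "income" was renamed to "monthly_income".
    {"year": 2016, "metadata": {"variables": ["monthly_income", "hh_size"],
                                "renamed_from": {"monthly_income": "income"}},
     "rows": [{"monthly_income": 130, "hh_size": 3}]},
]

def fetch(variable):
    """Return (year, value) pairs, following metadata renames across periods."""
    out = []
    for doc in surveys:
        name = variable
        # Map the requested historical name to this period's current name.
        for new, old in doc["metadata"].get("renamed_from", {}).items():
            if old == variable:
                name = new
        if name in doc["metadata"]["variables"]:
            out.extend((doc["year"], row[name]) for row in doc["rows"])
    return out

print(fetch("income"))  # → [(2015, 120), (2016, 130)]
```

    A relational SCD scheme would require schema migration or versioned dimension tables to answer the same query; here the rename is just data inside the document.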

  11. Using Metadata to Build Geographic Information Sharing Environment on Internet

    Directory of Open Access Journals (Sweden)

    Chih-hong Sun

    1999-12-01

    Full Text Available The Internet provides a convenient environment for sharing geographic information, and Web GIS (Geographic Information Systems) even gives users direct access to geographic databases over the Internet. However, the complexity of geographic data makes it difficult for users to understand their real content and limitations; in some cases, users may misuse the geographic data and make wrong decisions. Meanwhile, geographic data are distributed across various government agencies, academic institutes, and private organizations, which makes it even more difficult for users to fully understand the content of these complex data. To overcome these difficulties, this research uses metadata as a guiding mechanism for users to fully understand the content and the limitations of geographic data. We introduce three metadata standards commonly used for geographic data and the metadata authoring tools available in the US, and we review the current development of a geographic metadata standard in Taiwan. Two metadata authoring tools were developed in this research, which will enable users to build their own geographic metadata easily. [Article content in Chinese]

  12. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata- and ontology-based health information search engine ensuring semantic interoperability, to collect and provide health information across different application programs. A health information metadata ontology was developed using a distributed semantic Web content publishing model, based on the vocabularies used to index the contents generated by information producers as well as those used by users to search the contents. The vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata- and ontology-based health information search engine developed in this study produced better search results than existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and consumers.
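    The schema extension described above, the 15 Dublin Core elements plus an added audience element, can be sketched directly. The record values below are invented examples, and the validation logic is an illustration rather than the study's implementation.

```python
# The 15 elements of the Dublin Core Metadata Element Set.
DC_ELEMENTS = [
    "title", "creator", "subject", "description", "publisher", "contributor",
    "date", "type", "format", "identifier", "source", "language", "relation",
    "coverage", "rights",
]
# Element added in the study to describe the target audience.
SCHEMA = DC_ELEMENTS + ["audience"]

def make_record(**fields):
    """Build a schema-conformant record; reject fields outside the schema."""
    unknown = set(fields) - set(SCHEMA)
    if unknown:
        raise ValueError(f"not in schema: {unknown}")
    return {k: fields.get(k) for k in SCHEMA}

rec = make_record(title="Managing type 2 diabetes", language="ko",
                  audience="patients")
print(rec["audience"])  # → patients
```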

  13. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    Science.gov (United States)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS's 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model, the Unified Metadata Model (UMM). This work was done by the Analysis and Review of CMR (ARC) team.
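    The map-to-a-common-model idea can be sketched with a crosswalk table: records in different standards are normalized into one UMM-like dict so that downstream tooling treats them uniformly. The field names and crosswalk entries below are invented illustrations, not the actual UMM mappings.

```python
# Per-standard crosswalks from native element names to common-model names.
CROSSWALKS = {
    "dif": {"Entry_Title": "title", "Summary": "abstract"},
    "iso": {"citation_title": "title", "abstract_text": "abstract"},
}

def to_common(record, standard):
    """Normalize a native metadata record into the common model."""
    mapping = CROSSWALKS[standard]
    return {common: record[src] for src, common in mapping.items() if src in record}

a = to_common({"Entry_Title": "MODIS LST", "Summary": "Land surface temperature"}, "dif")
b = to_common({"citation_title": "MODIS LST"}, "iso")
print(a["title"] == b["title"])  # → True
```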

  14. EPA Metadata Style Guide Keywords and EPA Organization Names

    Science.gov (United States)

    The keywords and EPA organization names listed below, along with EPA's Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  15. Mining dark information resources to develop new informatics capabilities to support science

    Science.gov (United States)

    Ramachandran, Rahul; Maskey, Manil; Bugbee, Kaylin

    2016-04-01

    Dark information resources are digital resources that organizations collect, process, and store for regular business or operational activities but whose potential for other purposes goes unrealized. The challenge for any organization is to recognize, identify, and effectively exploit these dark information stores. Metadata catalogs at different data centers hold dark information resources consisting of structured information, free-form descriptions of data, and browse images. These information resources are never fully exploited beyond the few fields used for search and discovery. For example, the NASA Earth science catalog holds more than 6,000 data collections, 127 million records for individual files, and 67 million browse images. We believe that the information contained in the metadata catalogs and the browse images can be utilized beyond their original design intent to provide new data discovery and exploration pathways to support the science and education communities. In this paper we present two research applications that use information stored in the metadata catalog in a completely novel way. The first application is a data curation service whose objective is to augment the existing data search capabilities: given a specific atmospheric phenomenon, the service returns a ranked list of relevant data sets. Different fields in the metadata records, including textual descriptions, are mined. A specialized relevancy ranking algorithm has been developed that uses a "bag of words" to define phenomena, along with an ensemble of known approaches such as the Jaccard coefficient, cosine similarity, and zone ranking, to rank the data sets. This approach is also extended to map from the data set level to the data file variable level. The second application provides a service where a user can search and discover browse images containing specific phenomena from the vast catalog. This service will aid researchers
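    Two of the similarity measures named above, the Jaccard coefficient and cosine similarity, can be sketched as a simple averaged ensemble ranking dataset descriptions against a phenomenon "bag of words". This is an assumed simplification, not the ARC team's algorithm; the phenomenon word list and dataset descriptions are invented.

```python
import math
from collections import Counter

def jaccard(a, b):
    """Set overlap between two token lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    """Cosine similarity between term-frequency vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(phenomenon_words, datasets):
    """Rank (name, description) pairs by the averaged ensemble score."""
    scored = []
    for name, desc in datasets:
        words = desc.lower().split()
        score = (jaccard(phenomenon_words, words) + cosine(phenomenon_words, words)) / 2
        scored.append((score, name))
    return [n for _, n in sorted(scored, reverse=True)]

hurricane = ["hurricane", "cyclone", "wind", "precipitation"]
datasets = [("A", "sea surface temperature climatology"),
            ("B", "hurricane wind and precipitation profiles")]
print(rank(hurricane, datasets))  # dataset B ranks first
```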

  16. Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation

    OpenAIRE

    Chen, Tianyi; Ling, Qing; Giannakis, Georgios B.

    2017-01-01

    Network resource allocation shows revived popularity in the era of data deluge and information explosion. Existing stochastic optimization approaches fall short in attaining a desirable cost-delay tradeoff. Recognizing the central role of Lagrange multipliers in network resource allocation, a novel learn-and-adapt stochastic dual gradient (LA-SDG) method is developed in this paper to learn the sample-optimal Lagrange multiplier from historical data, and accordingly adapt the upcoming resource...

  17. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata that are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  18. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used to generate dynamic query controls such as sliders and tick boxes, which are ...

  19. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources, and services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, so federated search interfaces capable of unified queries across multiple sources are desirable. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an open-source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds, and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers research costs. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of open-source libraries.
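    The query pattern described above, an OpenSearch-style request translated into Solr parameters for keyword, spatial bounds, and a temporal range, can be sketched as follows. The field names (`text`, `bbox`, `start_date`) are assumptions for illustration, not NSIDC's actual Solr schema.

```python
from urllib.parse import urlencode

def solr_params(keyword, bbox=None, time_range=None):
    """Build Solr query parameters with keyword, spatial, and temporal filters."""
    params = {"q": f"text:({keyword})", "wt": "json"}
    fq = []
    if bbox:  # (min_lon, min_lat, max_lon, max_lat)
        w, s, e, n = bbox
        fq.append(f"bbox:[{s},{w} TO {n},{e}]")
    if time_range:
        start, end = time_range
        fq.append(f"start_date:[{start} TO {end}]")
    if fq:
        params["fq"] = " AND ".join(fq)
    return urlencode(params)

query = solr_params("sea ice extent", bbox=(-180, 60, 180, 90),
                    time_range=("2010-01-01T00:00:00Z", "2012-12-31T23:59:59Z"))
print(query)
```

    Keeping the filters in `fq` rather than `q` lets Solr cache them independently of the keyword query, which is one reason this layout performs well for repeated spatial/temporal searches.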

  20. Learning Resources and MOOCs

    DEFF Research Database (Denmark)

    Christiansen, René Boyer

    MOOCs (Massive Open Online Courses) have become a serious player within the field of education and learning in the past few years. MOOC research is thus a new field, but within the last 2-3 years it has developed rapidly (Liyanagunawardena et al., 2013; Bayne & Ross, 2014). Much of this research has had an emphasis on learners and outcomes as well as suitable business models. And even though the internet virtually overflows with lists of MOOCs to attend (such as the list from "Top 5 onlinecolleges", which features 99 MOOC environments), not much emphasis has been placed on the actual construction of learning resources within all these MOOCs, or on what demands they make on teachers' competences and skills.

  1. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    Science.gov (United States)

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  2. Enriching The Metadata On CDS

    CERN Document Server

    Chhibber, Nalin

    2014-01-01

    The project report revolves around the open-source software package Invenio, which provides tools for the management of digital assets in a repository and drives the CERN Document Server (CDS). The primary objective is to enhance the existing metadata in CDS with data from other libraries. An implicit part of this task is to manage disambiguation within incoming data, remove duplicate entries, and handle replication between new and existing records. All such elements and their corresponding changes are integrated within Invenio to make the upgraded metadata available on the CDS. The latter part of the report discusses some changes related to the Invenio code base itself.

  3. Selection and Use of Online Learning Resources by First-Year Medical Students: Cross-Sectional Study.

    Science.gov (United States)

    Judd, Terry; Elliott, Kristine

    2017-10-02

    Medical students have access to a wide range of learning resources, many of which have been specifically developed for or identified and recommended to them by curriculum developers or teaching staff. There is an expectation that students will access and use these resources to support their self-directed learning. However, medical educators lack detailed and reliable data about which of these resources students use to support their learning and how this use relates to key learning events or activities. The purpose of this study was to comprehensively document first-year medical student selection and use of online learning resources to support their bioscience learning within a case-based curriculum and assess these data in relation to our expectations of student learning resource requirements and use. Study data were drawn from 2 sources: a survey of student learning resource selection and use (2013 cohort; n=326) and access logs from the medical school learning platform (2012 cohort; n=337). The paper-based survey, which was distributed to all first-year students, was designed to assess the frequency and types of online learning resources accessed by students and included items about their perceptions of the usefulness, quality, and reliability of various resource types and sources. Of 237 surveys returned, 118 complete responses were analyzed (36.2% response rate). Usage logs from the learning platform for an entire semester were processed to provide estimates of first-year student resource use on an individual and cohort-wide basis according to method of access, resource type, and learning event. According to the survey data, students accessed learning resources via the learning platform several times per week on average, slightly more often than they did for resources from other online sources. Google and Wikipedia were the most frequently used nonuniversity sites, while scholarly information sites (eg, online journals and scholarly databases) were accessed

  4. An assessment of student experiences and learning based on a novel undergraduate e-learning resource.

    Science.gov (United States)

    Mehta, S; Clarke, F; Fleming, P S

    2016-08-12

    Purpose/objectives The aims of this study were to describe the development of a novel e-learning resource and to assess its impact on student learning experiences and orthodontic knowledge.Methods Thirty-two 4th year dental undergraduate students at Queen Mary University of London were randomly allocated to receive electronic access to e-learning material covering various undergraduate orthodontic topics over a 6-week period. Thirty-one control students were not given access during the study period. All students were asked to complete electronic quizzes both before (T0) and after (T1) the study period, as well as a general questionnaire concerning familiarity with e-learning. The test group also completed a user satisfaction questionnaire at T1. Two focus groups were also undertaken to explore learners' experiences and suggestions in relation to the resource.Results The mean quiz result improved by 3.9% and 4.5% in the control and test groups, respectively. An independent t-test, however, demonstrated no statistically significant difference in knowledge gain between the control and test groups (P = 0.941). The qualitative feedback indicated that students believed that use of the resource enhanced knowledge and basic understanding, with students expressing a wish to embed similar resources in other areas of undergraduate teaching.Conclusions Use of the novel orthodontic e-resource by 4th year undergraduate students over a 6-week period did not result in a significant improvement in subject knowledge. However, the e-learning resource has proven popular among undergraduates, and the resources will continue to be refined.

  5. Stressors, academic performance, and learned resourcefulness in baccalaureate nursing students.

    Science.gov (United States)

    Goff, Anne-Marie

    2011-01-01

    High stress levels in nursing students may affect memory, concentration, and problem-solving ability, and may lead to decreased learning, coping, academic performance, and retention. College students with higher levels of learned resourcefulness develop greater self-confidence, motivation, and academic persistence, and are less likely to become anxious, depressed, and frustrated, but no studies specifically involve nursing students. This explanatory correlational study used Gadzella's Student-life Stress Inventory (SSI) and Rosenbaum's Self-Control Scale (SCS) to explore learned resourcefulness, stressors, and academic performance in 53 baccalaureate nursing students. High levels of personal and academic stressors were evident but were not significant predictors of academic performance (p = .90). Age was a significant predictor of academic performance, and males and minority students had higher learned resourcefulness scores than females and Caucasians. Studies in larger, more diverse samples are necessary to validate these findings.

  6. DEVELOPMENT OF A METADATA MANAGEMENT SYSTEM FOR AN INTERDISCIPLINARY RESEARCH PROJECT

    Directory of Open Access Journals (Sweden)

    C. Curdt

    2012-07-01

    Full Text Available In every interdisciplinary, long-term research project it is essential to manage and archive all heterogeneous research data produced by the project participants during the project funding. This has to include sustainable storage, description with metadata, easy and secure provision, backup, and visualisation of all data. To ensure the accurate description of all project data with corresponding metadata, the design and implementation of a metadata management system is a significant duty, and the sustainable use and search of all research results during and after the end of the project depend on it. This paper therefore describes the practical experiences gained during the development of a scientific research data management system (called the TR32DB), including the corresponding metadata management system, for the multidisciplinary research project Transregional Collaborative Research Centre 32 (CRC/TR32), 'Patterns in Soil-Vegetation-Atmosphere Systems'. The entire system was developed according to the requirements of the funding agency, the user and project requirements, as well as recent standards and principles. The TR32DB is basically a combination of data storage, database, and web interface. The metadata management system was designed, realized, and implemented to describe and access all project data via accurate metadata. Since the amount and type of descriptive metadata depend on the kind of data, a user-friendly multi-level approach was chosen to cover these requirements. On this basis the self-developed CRC/TR32 metadata framework was designed: a combination of general, CRC/TR32-specific, and data-type-specific properties.

  7. Metadata Exporter for Scientific Photography Management

    Science.gov (United States)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way and be cataloged in a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow, and creating a format for efficient dissemination of the information related to the photos, is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs; however, they often have limitations: un-customizable metadata fields and poor metadata-extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which are exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than previously possible. With this tool it becomes very easy to manage and distribute scientific photos.
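    The exporter idea, customizable metadata fields rendered alongside each photo into a static HTML page, can be sketched briefly. This is not the CustomHTMLExport code; the field list and photo metadata below are invented examples.

```python
from html import escape

# The customizable field list: change this to alter the exported columns.
FIELDS = ["description", "date", "gps"]

def export_page(photos):
    """Render a list of photo-metadata dicts as an HTML table."""
    rows = []
    for p in photos:
        cells = "".join(f"<td>{escape(str(p.get(f, '')))}</td>" for f in FIELDS)
        rows.append(f"<tr><td><img src='{escape(p['file'])}'></td>{cells}</tr>")
    header = "".join(f"<th>{f}</th>" for f in FIELDS)
    return f"<table><tr><th>photo</th>{header}</tr>{''.join(rows)}</table>"

page = export_page([{"file": "dredge_042.jpg", "description": "basalt sample",
                     "date": "2004-08-12", "gps": "14.2N 169.1W"}])
print("basalt sample" in page)  # → True
```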

  8. GeoBoost: accelerating research involving the geospatial metadata of virus GenBank records.

    Science.gov (United States)

    Tahsin, Tasnia; Weissenbacher, Davy; O'Connor, Karen; Magge, Arjun; Scotch, Matthew; Gonzalez-Hernandez, Graciela

    2018-05-01

    GeoBoost is a command-line software package developed to address sparse or incomplete metadata in GenBank sequence records that relate to the location of the infected host (LOIH) of viruses. Given a set of GenBank accession numbers corresponding to virus GenBank records, GeoBoost extracts, integrates and normalizes geographic information reflecting the LOIH of the viruses using integrated information from GenBank metadata and related full-text publications. In addition, to facilitate probabilistic geospatial modeling, GeoBoost assigns probability scores for each possible LOIH. Binaries and resources required for running GeoBoost are packed into a single zipped file and freely available for download at https://tinyurl.com/geoboost. A video tutorial is included to help users quickly and easily install and run the software. The software is implemented in Java 1.8, and supported on MS Windows and Linux platforms. gragon@upenn.edu. Supplementary data are available at Bioinformatics online.

  9. Languages for Metadata

    NARCIS (Netherlands)

    Brussee, R.; Veenstra, M.; Blanken, Henk; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    The term meta originates from the Greek word μετά, meaning after. The word Metaphysics is the title of Aristotle's book coming after his book on nature, called Physics. This has given meta the modern connotation of a nature of a higher order or of a more fundamental kind [1]. Literally, metadata is

  10. An Assessment of the Evolving Common Metadata Repository Standards for Airborne Field Campaigns

    Science.gov (United States)

    Northup, E. A.; Chen, G.; Early, A. B.; Beach, A. L., III; Walter, J.; Conover, H.

    2016-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. While current data management practices demonstrate some success in serving airborne science team data user needs, existing metadata models and standards, such as NASA's Unified Metadata Model (UMM) for Collections (UMM-C), present challenges with respect to accommodating certain features of airborne science metadata. UMM is the model implemented in the Common Metadata Repository (CMR), which catalogs all metadata records for NASA's Earth Observing System Data and Information System (EOSDIS). One example of these challenges is the representation of spatial and temporal metadata. In addition, many airborne missions target a particular geophysical event, such as a developing hurricane; in such cases, metadata about the event is also important for understanding the data. While the coverage of satellite missions is highly predictable based on orbit characteristics, airborne missions feature complicated flight patterns where measurements can be spatially and temporally discontinuous. Therefore, existing metadata models will need to be expanded for airborne measurements and sampling strategies. An Airborne Metadata Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to identify specific features of airborne metadata that cannot currently be represented in the UMM and to develop new recommendations. The group includes representation from airborne data users and providers. This presentation will discuss the challenges and recommendations in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest and discoverability for a broader user community.

  11. Personalized Resource Recommendations using Learning from Positive and Unlabeled Examples

    Directory of Open Access Journals (Sweden)

    Priyank Thakkar

    2016-08-01

    Full Text Available This paper proposes a novel approach for recommending social resources using learning from positive and unlabeled examples. Bookmarks submitted on the social bookmarking system Delicious and artists on the online music system Last.fm are considered social resources. The foremost feature of this problem is that no labeled negative resources/examples are available for learning a recommender/classifier. Memory-based collaborative filtering has served as the most widely used algorithm for social resource recommendation; however, its predictions are based on ad hoc heuristic rules, and its success depends on the availability of a critical mass of users. This paper proposes model-based two-step techniques that learn a classifier from positive and unlabeled examples to provide personalized resource recommendations. In the first step of these techniques, a naïve Bayes classifier is employed to identify reliable negative resources. In the second step, to generate an effective resource recommender, a classification and regression tree and a least-squares support vector machine (LS-SVM) are exercised. A direct method based on LS-SVM, customized for learning from positive and unlabeled data, is also put forward to realize the recommendation task. Furthermore, the impact of feature selection on the proposed techniques is studied. Both memory-based collaborative filtering and the proposed techniques exploit usage data to generate personalized recommendations. Experimental results show that the proposed techniques outperform the existing method appreciably.
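    The two-step scheme can be sketched in miniature (an assumed simplification with no external ML libraries, not the paper's implementation): step 1 scores unlabeled resources with a naïve-Bayes-style positive likelihood and treats the lowest-scoring ones as reliable negatives; step 2 substitutes a simple nearest-example rule for the paper's CART / LS-SVM learners. The toy documents below are invented.

```python
import math
from collections import Counter

positives = [["jazz", "guitar"], ["jazz", "piano"]]
unlabeled = [["jazz", "sax"], ["football", "goal"], ["piano", "concert"]]
vocab = {w for d in positives + unlabeled for w in d}

# Word counts over the positive class for the naive Bayes score.
counts = Counter(w for d in positives for w in d)
total = sum(counts.values())

def positive_score(doc):
    """Laplace-smoothed log-likelihood of a document under the positive class."""
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in doc)

# Step 1: the least positive-like unlabeled docs become reliable negatives.
scored = sorted(unlabeled, key=positive_score)
reliable_negatives = scored[:1]

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(doc):
    """Step 2: recommend if the doc is closer to positives than to negatives."""
    pos = max(jaccard(doc, p) for p in positives)
    neg = max(jaccard(doc, n) for n in reliable_negatives)
    return pos > neg

print(recommend(["jazz", "concert"]))  # → True
```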

  12. Metadata capture in an electronic notebook: How to make it as simple as possible?

    Directory of Open Access Journals (Sweden)

    Menzel, Julia

    2015-09-01

    Full Text Available In the last few years, electronic laboratory notebooks (ELNs have become popular. ELNs offer the great possibility of capturing metadata automatically. Because of the high documentation effort it requires, metadata documentation is often neglected in science. To close the gap between good data documentation and the high documentation effort it imposes on scientists, a first user-friendly solution for capturing metadata in an easy way was developed. First, different protocols for the Western Blot were collected within the Collaborative Research Center 1002 and analyzed. Together with existing metadata standards identified in a literature search, a first version of the metadata scheme was developed. Second, the metadata scheme was customized for future users, including the implementation of default values for automated metadata documentation. Twelve protocols for the Western Blot were used to construct one standard protocol with ten different experimental steps. Three existing metadata standards were used as models to construct the first version of the metadata scheme, consisting of 133 data fields across ten experimental steps. Through a revision with future users, the final metadata scheme was shortened to 90 items in three experimental steps. Using individualized default values, 51.1% of the metadata can be captured with preset values in the ELN. This lowers the data documentation effort; at the same time, researchers benefit by providing standardized metadata for data sharing and re-use.
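    The default-value mechanism described above can be sketched as a scheme where some fields carry individualized defaults, so the scientist only enters what varies per experiment. The field names and defaults below are invented examples, not the actual Western Blot scheme.

```python
# Scheme: None means the value must be entered for every experiment;
# anything else is an individualized default captured automatically.
SCHEME = {
    "antibody": None,
    "blocking_agent": "milk 5%",
    "transfer_time_min": 60,
    "gel_percentage": None,
}

def capture(user_input):
    """Merge user input with scheme defaults; report gaps and auto-fill share."""
    record = {k: user_input.get(k, v) for k, v in SCHEME.items()}
    missing = [k for k, v in record.items() if v is None]
    prefilled = sum(1 for k in SCHEME
                    if k not in user_input and SCHEME[k] is not None)
    return record, missing, prefilled / len(SCHEME)

record, missing, auto_share = capture({"antibody": "anti-GAPDH"})
print(missing, auto_share)  # → ['gel_percentage'] 0.5
```

    The `auto_share` figure is the analogue of the paper's 51.1%: the fraction of fields the ELN fills without any typing.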

  13. A Meta-Relational Approach for the Definition and Management of Hybrid Learning Objects

    Science.gov (United States)

    Navarro, Antonio; Fernandez-Pampillon, Ana Ma.; Fernandez-Chamizo, Carmen; Fernandez-Valmayor, Alfredo

    2013-01-01

    Electronic learning objects (LOs) are commonly conceived of as digital units of information used for teaching and learning. To facilitate their classification for pedagogical planning and retrieval purposes, LOs are complemented with metadata (e.g., the author). These metadata are usually restricted by a set of predetermined tags to which the…

  14. Learning Resource Centre in a Company

    Directory of Open Access Journals (Sweden)

    Aleksander Pokovec

    2001-12-01

    Full Text Available In the conditions of growing competition people are becoming the essential competitive advantage. Because of too much work, stress, too many responsibilities and other factors, employees are often unmotivated. Everyday self-study is the best way that leads to excellence. In order to enable self-study for all employees, the organisation should organise their own learning resource centre that includes: educational videoprogrammes, audio tapes, books and e-learning programmes. All educational programmes should cover business and personal topics.

  15. CHARMe Commentary metadata for Climate Science: collecting, linking and sharing user feedback on climate datasets

    Science.gov (United States)

    Blower, Jon; Lawrence, Bryan; Kershaw, Philip; Nagni, Maurizio

    2014-05-01

    The research process can be thought of as an iterative activity, initiated from prior domain knowledge as well as a number of external inputs, and producing a range of outputs including datasets, studies, and peer-reviewed publications. These outputs may describe the problem under study, the methodology used, the results obtained, etc. In any new publication, the author may cite or comment on other papers or datasets in order to support their research hypothesis. However, as their work progresses, the researcher may draw on many other latent channels of information: for example, a private conversation following a lecture or during a social dinner, or an opinion expressed concerning some significant event such as an earthquake or a satellite failure. In addition, public sources of grey literature are important, such as informal papers (e.g., arXiv deposits), reports, and studies. The climate science community is no exception to this pattern; the CHARMe project, funded under the European FP7 framework, is developing an online system for collecting and sharing user feedback on climate datasets, to help users judge how suitable such climate data are for an intended application. The user feedback could be comments about assessments, citations, or provenance of the dataset, or other information such as descriptions of uncertainty or data quality. We define this as a distinct category of metadata called Commentary, or C-metadata. We link C-metadata with target climate datasets using a Linked Data approach via the Open Annotation data model. In the context of Linked Data, C-metadata plays the role of a resource which, depending on its nature, may be accessed as simple text or as more structured content. The project is implementing a range of software tools to create, search, or visualize C-metadata, including a JavaScript plugin enabling this functionality to be integrated in situ with data provider portals
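    A C-metadata record in the Open Annotation data model can be sketched as JSON-LD, linking a free-text comment (the annotation body) to a target dataset URI. The dataset URI and comment text below are invented placeholders, and this is an illustrative shape rather than CHARMe's exact serialization.

```python
import json

annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@type": "oa:Annotation",
    # The commentary itself: free text attached as the annotation body.
    "oa:hasBody": {
        "@type": "cnt:ContentAsText",
        "cnt:chars": "Sensor drift after 2009 adds a warm bias at high latitudes.",
    },
    # The climate dataset being commented on, identified by URI.
    "oa:hasTarget": {"@id": "http://example.org/datasets/sst-climatology-v2"},
    "oa:motivatedBy": "oa:commenting",
}

doc = json.dumps(annotation, indent=2)
print("oa:hasTarget" in doc)  # → True
```

    Because target datasets are plain URIs, the same annotation store can serve comments for any provider portal that knows its datasets' identifiers.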

  16. Describing Geospatial Assets in the Web of Data: A Metadata Management Scenario

    Directory of Open Access Journals (Sweden)

    Cristiano Fugazza

    2016-12-01

    Full Text Available Metadata management is an essential enabling factor for geospatial assets because discovery, retrieval, and actual usage of the latter are tightly bound to the quality of these descriptions. Unfortunately, the multi-faceted landscape of metadata formats, requirements, and conventions makes it difficult to identify editing tools that can be easily tailored to the specificities of a given project, workgroup, and Community of Practice. Our solution is a template-driven metadata editing tool that can be customised to any XML-based schema. Its output is constituted by standards-compliant metadata records that also have a semantics-aware counterpart eliciting novel exploitation techniques. Moreover, external data sources can easily be plugged in to provide autocompletion functionalities on the basis of the data structures made available on the Web of Data. Besides presenting the essentials on customisation of the editor by means of two use cases, we extend the methodology to the whole life cycle of geospatial metadata. We demonstrate the novel capabilities enabled by RDF-based metadata representation with respect to traditional metadata management in the geospatial domain.

  17. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    Science.gov (United States)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general-purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support them. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to

  18. Using a Metro Map Metaphor for organizing Web-based learning resources

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Bang, Tove; Hansen, Per Steen

    2002-01-01

    This paper briefly describes the WebNize system and how it applies a Metro Map metaphor for organizing guided tours in Web-based resources. Then, experiences in using the Metro Map based tours in a Knowledge Sharing project at the library at Aarhus School of Business (ASB) in Denmark are discussed. The Library has been involved in establishing a Learning Resource Center (LRC). The LRC serves as an exploratorium for the development and testing of new forms of communication and learning, at the same time as it integrates the information resources of the electronic research library. The objective is to create models for Intelligent Knowledge Solutions that can contribute to forming the learning environments of the School in the 21st century. The WebNize system is used for sharing of knowledge through metro maps for specific subject areas made available in the Learning Resource Centre at ASB. The metro…

  19. Illuminating the Darkness: Exploiting untapped data and information resources in Earth Science

    Data.gov (United States)

    National Aeronautics and Space Administration — We contend that Earth science metadata assets are dark resources, information resources that organizations collect, process, and store for regular business or...

  20. Parenting Styles and Learned Resourcefulness of Turkish Adolescents

    Science.gov (United States)

    Turkel, Yesim Deniz; Tezer, Esin

    2008-01-01

    This study investigated the differences among 834 high school students regarding learned resourcefulness in terms of perceived parenting style and gender. The data were gathered by administering the Parenting Style Inventory (PSI) and Rosenbaum's Self-Control Schedule (SCS). The results of ANOVA pertaining to the scores of learned resourcefulness…

  1. A semantically rich and standardised approach enhancing discovery of sensor data and metadata

    Science.gov (United States)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise

    2016-04-01

    The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state-of-the-art sensor technology. As the number of sensors deployed is continually increasing, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above-mentioned challenges. The Open Geospatial Consortium (OGC) has created Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF) and the RDF query language SPARQL. As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5-star Linked Data and OGC SWE (SensorML, Observations & Measurements) standards. Thus sensors become readily discoverable, accessible and usable via the web. Content- and context-based searching is also enabled since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done at BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor
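    As a rough illustration of the Linked Data publication step described above, the sketch below serializes a sensor description as N-Triples using only the standard library. The sensor and observed-property URIs under example.org are invented stand-ins; a real deployment would use persistent identifiers and terms from SensorML/SSN and controlled vocabularies. The SOSA vocabulary terms used here are genuine W3C ones:

    ```python
    # Stdlib-only sketch: describe a sensor as RDF triples in N-Triples syntax.
    def ntriple(s, p, o):
        """Serialize one triple; URIs are angle-bracketed, literals are quoted."""
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        return f"<{s}> <{p}> {obj} ."

    SENSOR = "http://example.org/sensor/ctd-001"   # hypothetical persistent URI
    triples = [
        ntriple(SENSOR, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
                "http://www.w3.org/ns/sosa/Sensor"),
        ntriple(SENSOR, "http://www.w3.org/ns/sosa/observes",
                "http://example.org/property/sea-water-temperature"),
        ntriple(SENSOR, "http://purl.org/dc/terms/description",
                "CTD package deployed on mooring A"),
    ]
    print("\n".join(triples))
    ```

    Once published like this, the description can be crawled, dereferenced and queried with SPARQL alongside other Linked Data sources.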

  2. Resource-based learning strategies: implications for students and institutions

    Directory of Open Access Journals (Sweden)

    Malcolm Ryan

    1996-12-01

    Full Text Available In its strategic plan, the University of Greenwich envisages a significant shift to resource-based learning (RBL). Enterprise in Higher Education (EHE) has funded five pilot RBL projects during the past year, including one in introductory economics. The project was managed by three lecturers in the School of Social Sciences, supported by an Academic Development Officer. Learning outcomes were completely revised, and a range of assessment strategies, including computer-based tests, was identified. A resources guide was produced which identified the materials and activities that would enable students to achieve the learning outcomes. A number of innovations were adopted, including:
    • computer-based curriculum delivery, assessment, and student evaluation of the course;
    • an open approach to assessment;
    • abolishing lectures in favour of a diverse range of teaching and learning activities.

  3. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  4. A document centric metadata registration tool constructing earth environmental data infrastructure

    Science.gov (United States)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the geographic scope covered or the date created. The metadata standards including these common attributes are published by the ISO (International Organization for Standardization) geographic information technical committee (TC211) as ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata is based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and non-professionals. From the viewpoint of data providers, however, two problems were pointed out in discussions. One is that data providers prefer to minimize the extra tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so it does not require data providers to install any software as long as they have a web browser.
The interface of the tool
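    A heavily simplified sketch of the document centric idea: metadata fields drawn from a provider's explanatory document are turned into ISO-19139-style XML automatically, so writing the document is the only task the provider performs. The element nesting is greatly reduced compared with the real ISO 19139 schema, and the record content is invented; only the namespaces are the genuine ISO ones:

    ```python
    import xml.etree.ElementTree as ET

    # Map a provider document (here just a dict) onto ISO-19139-style elements.
    GMD = "http://www.isotc211.org/2005/gmd"
    GCO = "http://www.isotc211.org/2005/gco"
    ET.register_namespace("gmd", GMD)
    ET.register_namespace("gco", GCO)

    def doc_to_iso(doc):
        """Generate a (drastically flattened) metadata record from doc fields."""
        root = ET.Element(f"{{{GMD}}}MD_Metadata")
        for field in ("title", "abstract"):
            wrapper = ET.SubElement(root, f"{{{GMD}}}{field}")
            ET.SubElement(wrapper, f"{{{GCO}}}CharacterString").text = doc[field]
        return root

    record = doc_to_iso({"title": "River discharge 1990-2009",
                         "abstract": "Daily discharge integrated over basin X."})
    print(ET.tostring(record, encoding="unicode"))
    ```

    The appeal of the approach is visible even in this toy: the provider never touches the XML, so the metadata cannot drift out of sync with the document it was generated from.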

  5. Automated Metadata Extraction

    Science.gov (United States)

    2008-06-01

    Store [4]. The files purchased from the iTunes Music Store include the following metadata: Name; Email address of purchaser; Year; Album. … Music: MP3 and AAC … Tagged Image File Format … Moving Picture Experts Group (MPEG): set of standards for music encoding. Open Document Format (ODF): an open, license-free, and clearly documented file format

  6. Overview of long-term field experiments in Germany - metadata visualization

    Science.gov (United States)

    Muqit Zoarder, Md Abdul; Heinrich, Uwe; Svoboda, Nikolai; Grosse, Meike; Hierold, Wilfried

    2017-04-01

    BonaRes ("soil as a sustainable resource for the bioeconomy") is collecting data and metadata from agricultural long-term field experiments (LTFE) in Germany. It is funded by the German Federal Ministry of Education and Research (BMBF) under the umbrella of the National Research Strategy BioEconomy 2030. BonaRes consists of ten interdisciplinary research project consortia and the 'BonaRes - Centre for Soil Research'. The BonaRes Data Centre is responsible for collecting all LTFE data and related metadata into an enterprise database with a high level of security, and for visualizing the data and metadata through a data portal. In the frame of the BonaRes project, we are compiling an overview of long-term field experiments in Germany based on a literature review, the results of an online survey and direct contacts with LTFE operators. Information about research topic, contact person, website, experiment setup and analyzed parameters is collected. Based on the collected LTFE data, an enterprise geodatabase has been developed and a GIS-based web information system about LTFEs in Germany has been set up. Various aspects of the LTFEs, such as experiment type, land-use type, agricultural category and duration of experiment, are presented in thematic maps. This information system is dynamically linked to the database, which means changes in the data directly affect the presentation. An easy data search option using LTFE name, location or operator and a dynamic layer selection ensure a user-friendly web application. Dispersion and visualization of the overlapping LTFE points on the overview map are also challenging; we have automated this at every zoom level as a consistent part of the application. The application provides both the spatial location and the meta-information of LTFEs, backed by an enterprise geodatabase, a GIS server hosting map services and a JavaScript API for web application development.

  7. Adoption of Technology and Augmentation of Resources for Teaching-Learning in Higher Education

    OpenAIRE

    P. M. Suresh Kumar

    2017-01-01

    Learner-centred education through appropriate methodologies facilitates effective learning, as the teaching-learning modalities of higher education are considered to be relevant to the learner group. Curriculum delivery and pedagogy should incorporate a multitude of learning experiences and innovative learning methodologies through the adoption of technology. Plenty of resources external to the curriculum come into use, which offer valuable learning experiences. Augmentation of resources for teaching...

  8. Automating the Extraction of Metadata from Archaeological Data Using iRods Rules

    Directory of Open Access Journals (Sweden)

    David Walling

    2011-10-01

    Full Text Available The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRods rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationship between the data objects and map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRods preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error-prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and storage of the data and metadata.
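    The core record-keeping idea, metadata implicit in a naming convention and extracted automatically on ingest, might look roughly like this. The path convention <site>/<trench>/<material>/<id>_<date>.tif is invented here for illustration and is not the actual ICA scheme:

    ```python
    import re

    # Extract the metadata implied by a hypothetical directory/file naming
    # convention: <site>/<trench>/<material>/<object_id>_<YYYY-MM-DD>.tif
    PATTERN = re.compile(
        r"(?P<site>[^/]+)/(?P<trench>[^/]+)/(?P<material>[^/]+)/"
        r"(?P<object_id>[A-Z0-9]+)_(?P<date>\d{4}-\d{2}-\d{2})\.tif$"
    )

    def extract_metadata(path):
        """Return the metadata dict implied by the path, or None if it
        does not follow the convention (a candidate for manual review)."""
        m = PATTERN.search(path)
        return m.groupdict() if m else None

    meta = extract_metadata("chersonesos/trench-07/ceramic/PX1042_2008-06-14.tif")
    print(meta)
    ```

    In the real system this logic would run inside an iRods rule on ingest, so the extracted fields land in the preservation environment alongside the object with no manual typing.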

  9. An evaluation of learning resources in the teaching of formal philosophical methods

    Directory of Open Access Journals (Sweden)

    Susan A.J. Stuart

    2003-12-01

    Full Text Available In any discipline, across a wide variety of subjects, there are numerous learning resources available to students. For many students the resources that will be most beneficial to them are quickly apparent, but, because of the nature of philosophy and the philosophical method, it is not immediately clear which resources will be most valuable to students for whom the development of critical thinking skills is crucial. If we are to support these students effectively in their learning, we must establish what these resources are, how we can continue to maintain and improve them, and how we can encourage students to make good use of them. In this paper we describe and assess our evaluation of the use made by students of learning resources in the context of learning logic and developing their critical thinking skills. We also assess the use of a new resource, electronic handsets, the purpose of which is to encourage students to respond to questions in lectures and to gain feedback about how they are progressing with the material.

  10. Metadata Quality in Institutional Repositories May be Improved by Addressing Staffing Issues

    Directory of Open Access Journals (Sweden)

    Elizabeth Stovold

    2016-09-01

    Full Text Available A Review of: Moulaison, S. H., & Dykas, F. (2016). High-quality metadata and repository staffing: Perceptions of United States–based OpenDOAR participants. Cataloging & Classification Quarterly, 54(2), 101-116. http://dx.doi.org/10.1080/01639374.2015.1116480 Objective – To investigate the quality of institutional repository metadata, metadata practices, and identify barriers to quality. Design – Survey questionnaire. Setting – The OpenDOAR online registry of worldwide repositories. Subjects – A random sample of 50 from 358 administrators of institutional repositories in the United States of America listed in the OpenDOAR registry. Methods – The authors surveyed a random sample of administrators of American institutional repositories included in the OpenDOAR registry. The survey was distributed electronically. Recipients were asked to forward the email if they felt someone else was better suited to respond. There were questions about the demographics of the repository, the metadata creation environment, metadata quality, standards and practices, and obstacles to quality. Results were analyzed in Excel, and qualitative responses were coded by two researchers together. Main results – There was a 42% (n=21) response rate to the section on metadata quality, a 40% (n=20) response rate to the metadata creation section, and 40% (n=20) to the section on obstacles to quality. The majority of respondents rated their metadata quality as average (65%, n=13) or above average (30%, n=5). No one rated the quality as high or poor, while 10% (n=2) rated the quality as below average. The survey found that the majority of descriptive metadata was created by professional (84%, n=16) or paraprofessional (53%, n=10) library staff. Professional staff were commonly involved in creating administrative metadata, reviewing the metadata, and selecting standards and documentation. Department heads and advisory committees were also involved in standards and documentation

  11. Studies of Big Data metadata segmentation between relational and non-relational databases

    Science.gov (United States)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe the data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
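    A toy sketch of such a hybrid split: frequently queried, fixed-schema fields go into a relational table (SQLite stands in for the RDBMS), while the variable, schema-free remainder of each metadata record is kept as a JSON document (standing in for the NoSQL side). The record fields are invented:

    ```python
    import json
    import sqlite3

    # Relational side: fixed, indexed columns. Document side: free-form JSON.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE task (id TEXT PRIMARY KEY, status TEXT, doc TEXT)")

    def store(record):
        """Split a metadata record: hot fields into columns, the rest into JSON."""
        fixed = {"id": record.pop("id"), "status": record.pop("status")}
        conn.execute("INSERT INTO task VALUES (?, ?, ?)",
                     (fixed["id"], fixed["status"], json.dumps(record)))

    store({"id": "t-001", "status": "done", "cpu_hours": 12.5, "site": "CERN"})
    row = conn.execute("SELECT status, doc FROM task WHERE id='t-001'").fetchone()
    print(row[0], json.loads(row[1]))
    ```

    The design choice this illustrates: the relational side keeps exact, indexable queries fast on the stable fields, while the document side absorbs schema drift as new metadata attributes appear over the system's lifetime.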

  12. Studies of Big Data metadata segmentation between relational and non-relational databases

    CERN Document Server

    Golosova, M V; Klimentov, A A; Ryabinkin, E A; Dimitrov, G; Potekhin, M

    2015-01-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe the data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  13. Strategic Resource Use for Learning: A Self-Administered Intervention That Guides Self-Reflection on Effective Resource Use Enhances Academic Performance.

    Science.gov (United States)

    Chen, Patricia; Chavez, Omar; Ong, Desmond C; Gunderson, Brenda

    2017-06-01

    Many educational policies provide learners with more resources (e.g., new learning activities, study materials, or technologies), but less often do they address whether students are using these resources effectively. We hypothesized that making students more self-reflective about how they should approach their learning with the resources available to them would improve their class performance. We designed a novel Strategic Resource Use intervention that students could self-administer online and tested its effects in two cohorts of a college-level introductory statistics class. Before each exam, students randomly assigned to the treatment condition strategized about which academic resources they would use for studying, why each resource would be useful, and how they would use their resources. Students randomly assigned to the treatment condition reported being more self-reflective about their learning throughout the class, used their resources more effectively, and outperformed students in the control condition by an average of one third of a letter grade in the class.

  14. Development of Human Resources Using New Technologies in Long-Life Learning

    Directory of Open Access Journals (Sweden)

    Micu Bogdan Ghilic

    2011-01-01

    Full Text Available Information and communication technologies (ICT) offer new opportunities to reinvent education and to make learning more fun and contemporary, but they pose many problems for educational institutions. Implementation of ICT brings major structural changes in organizations and a mental switch from a bureaucratic mentality to a customer-oriented one. In this paper I try to evaluate methods of developing lifelong learning programs, their impact on human resources training and development, and the impact of this process on educational institutions. Using e-learning in training human resources can be a new step in the development of educational institutions, human resources and companies.

  15. A Video Game for Learning Brain Evolution: A Resource or a Strategy?

    Science.gov (United States)

    Barbosa Gomez, Luisa Fernanda; Bohorquez Sotelo, Maria Cristina; Roja Higuera, Naydu Shirley; Rodriguez Mendoza, Brigitte Julieth

    2016-01-01

    Learning resources are part of the educational process of students. But how do video games act as learning resources for a population that has not selected virtual instruction as its main methodology? The aim of this study was to identify the influence of a video game on the learning process of brain evolution. For this purpose, the opinions…

  16. Integrated Array/Metadata Analytics

    Science.gov (United States)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored or solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive or entirely absent in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  17. A case for user-generated sensor metadata

    Science.gov (United States)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata that allows determining the fitness for use of a dataset. Efforts toward syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet they are often quite complex, which is of course due to the complexity of the problem at hand. Still, we see the most generic information for determining fitness for use as a dataset's provenance, because it allows users to make up their own minds independently of existing classification schemes for data quality. In this work we make the case that curated user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
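    The "extremely simplified data model" argued for above might look something like the minimal sketch below: a single record type that lets any user attach a provenance or quality note to a dataset, with none of the ISO-level complexity. All field names and example records are invented:

    ```python
    from dataclasses import dataclass, field

    # One record type for user-contributed metadata: who said what about which
    # dataset. Deliberately much simpler than ISO 19115 provenance structures.
    @dataclass
    class UserMetadata:
        dataset: str                      # identifier of the annotated series
        author: str                       # who contributed the note
        note: str                         # free-text fitness-for-use remark
        tags: list = field(default_factory=list)

    notes = [
        UserMetadata("rain-gauge-42", "citizen-A",
                     "Gauge relocated in 2014.", ["provenance"]),
        UserMetadata("rain-gauge-42", "hydrologist-B",
                     "Spikes in June look like sensor faults.", ["quality"]),
    ]
    print([n.author for n in notes])
    ```

    The point of keeping the model this flat is exactly the one the abstract makes: a citizen and a researcher can both read and write such records, and each data user judges fitness for use from the accumulated notes rather than from a formal quality classification.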

  18. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    Science.gov (United States)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has been established. ONEMercury has been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the differing levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records are not compatible with the advanced search functionality offered by ONEMercury, as the interface requires a metadata record to be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, underscoring the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity.
We utilize the
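    The paper's approach mines latent topics with a topic model; as a stdlib-only stand-in, the sketch below substitutes plain cosine similarity over abstract terms and copies the keywords of the closest well-annotated record. The example records are invented:

    ```python
    import math
    from collections import Counter

    # Suggest keywords for a poorly annotated record from its nearest
    # well-annotated neighbour (term-overlap cosine similarity, not a real
    # topic model -- a simplification of the TM-based approach).
    def cosine(a, b):
        num = sum(a[t] * b[t] for t in a)
        return num / ((math.sqrt(sum(v * v for v in a.values())) *
                       math.sqrt(sum(v * v for v in b.values()))) or 1)

    def suggest_keywords(poor_abstract, annotated):
        """annotated: list of (abstract, keywords) for well-curated records."""
        query = Counter(poor_abstract.lower().split())
        best = max(annotated,
                   key=lambda rec: cosine(query, Counter(rec[0].lower().split())))
        return best[1]

    annotated = [
        ("soil moisture observations from cosmic ray probes", ["soil", "hydrology"]),
        ("ocean chlorophyll from satellite imagery", ["ocean", "biology"]),
    ]
    print(suggest_keywords("gridded soil moisture product", annotated))
    ```

    A real topic model would generalize beyond exact word overlap (e.g. relating "precipitation" to "rainfall"), which is precisely why the paper uses topics rather than raw terms; the sketch only shows the suggestion mechanics.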

  19. Metadata as a means for correspondence on digital media

    NARCIS (Netherlands)

    Stouffs, R.; Kooistra, J.; Tuncer, B.

    2004-01-01

    Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as

  20. Finding Atmospheric Composition (AC) Metadata

    Science.gov (United States)

    Strub, Richard F..; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on the Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster describes our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like Castor. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed.) 6. Most (if not all
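    The "IDs first, then full records" pattern recommended in conclusion 4 can be sketched as follows. The endpoint URL, query parameters, and the Atom response below are invented for illustration and do not reflect the actual GCMD or ECHO APIs:

```python
# Sketch of the "IDs first, then data" harvesting pattern for
# machine-to-machine catalog access.  The endpoint and the Atom
# response are illustrative stand-ins, not a real catalog feed.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def id_search_url(base, keyword, page=1, per_page=50):
    """Build an OpenSearch-style query that asks for record IDs only."""
    return base + "?" + urlencode({
        "keyword": keyword, "page": page, "page_size": per_page,
        "fields": "id",      # IDs only -> small, fast responses, no timeouts
    })

def extract_entry_ids(atom_xml):
    """Pull <entry><id> values out of an Atom OpenSearch response."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(ATOM + "id") for e in root.findall(ATOM + "entry")]

sample_response = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>CO2 datasets</title>
  <entry><id>C001-DEMO</id><title>AIRS CO2</title></entry>
  <entry><id>C002-DEMO</id><title>ACOS CO2</title></entry>
</feed>"""

ids = extract_entry_ids(sample_response)
# Each full metadata record would then be fetched individually by ID,
# rather than returning every complete entry in one large response.
print(id_search_url("https://example.org/opensearch", "CO2"))
print(ids)
```

The design point is simply that the expensive per-record fetches are deferred until a small ID-only search has narrowed the result set.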

  1. Big Rock Candy Mountain. Resources for Our Education. A Learning to Learn Catalog. Winter 1970.

    Science.gov (United States)

    Portola Inst., Inc., Menlo Park, CA.

    Imaginative learning resources of various types are reported in this catalog under the subject headings of process learning, education environments, classroom materials and methods, home learning, and self discovery. Books reviewed are on the subjects of superstition, Eastern religions, fairy tales, philosophy, creativity, poetry, child care,…

  2. Separation of metadata and pixel data to speed DICOM tag morphing.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only the metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of them. Tag morphing includes the deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
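    The performance argument can be illustrated with a toy model of the two layouts. This is not real DICOM encoding, only a sketch of why keeping bulk pixel data out of the rewrite path makes tag morphing cheap:

```python
# Illustration (not real DICOM I/O) of why separating metadata from
# pixel data speeds up tag morphing: in a combined object the whole
# file is rewritten; with a separate metadata store only the small
# metadata part is touched.
import json

def morph_combined(blob: bytes, new_tags: dict) -> bytes:
    """Combined layout: header and pixels travel together, so updating
    a tag means parsing and rewriting the entire object."""
    header_len = int.from_bytes(blob[:4], "big")
    tags = json.loads(blob[4:4 + header_len])
    pixels = blob[4 + header_len:]          # dragged along unchanged
    tags.update(new_tags)
    new_header = json.dumps(tags).encode()
    return len(new_header).to_bytes(4, "big") + new_header + pixels

def morph_separated(tags: dict, new_tags: dict) -> dict:
    """Separated layout: the bulk pixel data is never read or written."""
    return {**tags, **new_tags}

pixels = bytes(1_000_000)                   # stand-in for bulk pixel data
header = json.dumps({"PatientID": "P1"}).encode()
combined = len(header).to_bytes(4, "big") + header + pixels

updated = morph_combined(combined, {"PatientID": "ANON"})
meta_only = morph_separated({"PatientID": "P1"}, {"PatientID": "ANON"})
```

In the separated case a deidentification pass works on a few kilobytes of attributes, while the combined case drags the megabytes of pixel data through every update.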

  3. Mercury- Distributed Metadata Management, Data Discovery and Access System

    Science.gov (United States)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin Core, Darwin Core, EML, and ISO 19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as a Thesaurus Service, a Gazetteer Web Service and UDDI Directory Services. The system also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.
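    Mercury's "centralized index over distributed data" model can be sketched in a few lines. The provider names and records below are invented for illustration; only the shape of the idea, harvest once, search centrally, leave the data at the source, is taken from the abstract:

```python
# Toy version of the federated-harvest model: metadata from distributed
# providers is merged into one central keyword index, while the data
# itself stays with the provider.  Records here are made up.
from collections import defaultdict

providers = {
    "ornl_daac": [{"id": "d1", "title": "Boreal forest flux"}],
    "cdiac":     [{"id": "d2", "title": "Global CO2 record"}],
}

def build_index(providers):
    """Harvest every provider's records into a single keyword index."""
    index = defaultdict(list)
    for source, records in providers.items():
        for rec in records:
            for word in rec["title"].lower().split():
                index[word].append((source, rec["id"]))
    return index

index = build_index(providers)
print(index["co2"])   # fast central lookup; the data stays at the source
```

A search hit returns a (provider, id) pair, so the user is handed back to the owning repository for the data itself, which is how providers retain control and ownership.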

  4. Adaptation of mathematical educational content in e-learning resources

    Directory of Open Access Journals (Sweden)

    Yuliya V. Vainshtein

    2017-01-01

    Modern trends in the development of electronic educational systems worldwide point to the need to develop and implement adaptive intelligent learning environments and resources. A promising way to improve the quality of instruction in mathematical disciplines is the development and application of adaptive electronic educational resources. However, experience in developing and applying adaptive technologies in higher education is currently extremely limited and lacks flexibility. Adaptive educational resources in an electronic environment are electronic educational resources that provide the student with a personal educational space, filled with educational content that “adapts” to the individual characteristics of the student and provides the necessary information. This article focuses on the development of adaptation algorithms for mathematical educational content and on their implementation in an e-learning system. A distinctive feature of the proposed algorithms is that they can be applied and reused in the construction of adaptive e-learning resources. The novelty of the proposed approach is the three-step organization of content adaptation: “introductory adaptation of content”, “current adaptation of content”, and “estimative and corrective adaptation”. For each step of the proposed system, mathematical algorithms for adapting educational content in adaptive e-learning resources are presented. Because of the high level of abstraction and the complexity of perception of mathematical disciplines, the educational content is presented in several editions that correspond to levels of assimilation of the course material. Adaptation consists in selecting the optimal edition of the material, the one that best matches the individual characteristics of the student. The introduction of a three-step content organization of the adaptive

  5. Model of e-learning with electronic educational resources of new generation

    Directory of Open Access Journals (Sweden)

    A. V. Loban

    2017-01-01

    Purpose of the article: to improve the scientific and methodological basis of the theory of variable e-learning. Methods used: conceptual and logical modeling of the variable e-learning process with an electronic educational resource of the new generation, and system analysis of the interconnections between the studied subject area, methods, didactic approaches, and information and communication technology tools. Results: a formalized complex model of variable e-learning with an electronic educational resource of the new generation is developed, conditionally decomposed into three basic components: the formalization model of the course in the form of a thesaurus-classifier (“Author of e-resource”), the model of learning as management (“Coordination. Consultation. Control”), and the learning model with the thesaurus-classifier (“Student”). The “Author of e-resource” model allows the student to achieve completeness, a high degree of didactic elaboration, and structuring of the studied material in triples of variants: modules of educational information, practical tasks, and control tasks; the result of the student's (author of the e-resource) activity is the thesaurus-classifier. The model of learning as management is based on the principle of the personal orientation of learning in a computer environment and determines the logic of interaction between the lecturer and the student when determining the triple of variants individually for each student; the organization of a dialogue between the lecturer and the student for consulting purposes; and personal control of the student's success (report generation and iterative search for the concept of the class assignment in the thesaurus-classifier until the required level of training is acquired). The “Student” model makes it possible to concretize the learning tasks in relation to the personality of the student and to the training level achieved; the assumption of the lecturer about the level of training of a

  6. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt datatypes were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata and the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents; demonstrate a publication's adherence to the MIFlowCyt standard; promote interoperability; and should also result in the textual and numeric data being published using web technology without any change in composition.

  7. Mobile authoring of open educational resources for authentic learning scenarios

    NARCIS (Netherlands)

    Tabuenca, Bernardo; Kalz, Marco; Ternier, Stefaan; Specht, Marcus

    2014-01-01

    The proliferation of smartphones in the last decade and the number of publications in the field of authoring systems for computer-assisted learning depict a scenario that needs to be explored in order to facilitate the scaffolding of learning activities across contexts. Learning resources are

  8. Sustainability Learning in Natural Resource Use and Management

    Directory of Open Access Journals (Sweden)

    J. David Tàbara

    2007-12-01

    We contribute to the normative discussion on sustainability learning and provide a theoretical integrative framework intended to underlie the main components and interrelations of what learning is required for social learning to become sustainability learning. We demonstrate how this framework has been operationalized in a participatory modeling interface to support processes of natural resource integrated assessment and management. The key modeling components of our view are: structure (S), energy and resources (E), information and knowledge (I), social-ecological change (C), and the size, thresholds, and connections of different social-ecological systems. Our approach attempts to overcome many of the cultural dualisms that exist in the way social and ecological systems are perceived and affect many of the most common definitions of sustainability. Our approach also emphasizes the issue of limits within a total social-ecological system and takes a multiscale, agent-based perspective. Sustainability learning is different from social learning insofar as not all of the outcomes of social learning processes necessarily improve what we consider as essential for the long-term sustainability of social-ecological systems, namely, the co-adaptive systemic capacity of agents to anticipate and deal with the unintended, undesired, and irreversible negative effects of development. Hence, the main difference of sustainability learning from social learning is the content of what is learned and the criteria used to assess such content; these are necessarily related to increasing the capacity of agents to manage, in an integrative and organic way, the total social-ecological system of which they form a part. The concept of sustainability learning and the SEIC social-ecological framework can be useful to assess and communicate the effectiveness of multiple agents to halt or reverse the destructive trends affecting the life-support systems upon which all humans

  9. Impact of e-resources on learning in biochemistry: first-year medical students' perceptions.

    Science.gov (United States)

    Varghese, Joe; Faith, Minnie; Jacob, Molly

    2012-05-16

    E-learning resources (e-resources) have been widely used to facilitate self-directed learning among medical students. The Department of Biochemistry at Christian Medical College (CMC), Vellore, India, has made available e-resources to first-year medical students to supplement conventional lecture-based teaching in the subject. This study was designed to assess students' perceptions of the impact of these e-resources on various aspects of their learning in biochemistry. Sixty first-year medical students were the subjects of this study. At the end of the one-year course in biochemistry, the students were administered a questionnaire that asked them to assess the impact of the e-resources on various aspects of their learning in biochemistry. Ninety-eight percent of students had used the e-resources provided to varying extents. Most of them found the e-resources provided useful and of a high quality. The majority of them used these resources to prepare for periodic formative and final summative assessments in the course. The use of these resources increased steadily as the academic year progressed. Students said that the extent to which they understood the subject (83%) and their ability to answer questions in assessments (86%) had improved as a result of using these resources. They also said that they found biochemistry interesting (73%) and felt motivated to study the subject (59%). We found that first-year medical students extensively used the e-resources in biochemistry that were provided. They perceived that these resources had made a positive impact on various aspects of their learning in biochemistry. We conclude that e-resources are a useful supplement to conventional lecture-based teaching in the medical curriculum.

  10. Effective post-literacy learning: A question of a national human resource strategy

    Science.gov (United States)

    Ahmed, Manzoor

    1989-12-01

    Initial literacy courses must be followed by opportunities for consolidating the mechanics of literacy skills and for the practical application of these skills in life. Experience has shown that these 'post-literacy' objectives can be achieved, not by a second stage of the literacy course, but by a range of opportunities for learning and application of learning through a network of continuing education opportunities geared to the diverse needs and circumstances of different categories of neo-literates. A taxonomy of learner categories and learning needs is seen as a basis for planning and supporting the network of post-literacy learning. Examples from China, India and Thailand demonstrate the importance of recognizing the continuity of literacy and post-literacy efforts, the need for commitment of resources for this continuum of learning, the role of an organizational structure to deal with this continuum in a coordinated way, and the value of a comprehensive range of learning opportunities for neo-literates. A necessary condition for success in building a network of continuing learning opportunities and contributing to the creation of a 'learning society' is to make human resource development the core of national development. It is argued that the scope and dimensions of post-literacy continuing education are integrally linked with the goal of mass basic education and ultimately with the vision of a 'learning society'. Such a vision can be a reality only with a serious human resource development focus in national development that will permit the necessary mobilization of resources, the coordination of sectors of government and society and the generation of popular enthusiasm. A radical or an incremental approach can be taken to move towards the primacy of a human resource strategy in national development. In either case, a functioning coordination and support mechanism has to be developed for the key elements of mass basic education, including post-literacy learning.

  11. Exploiting Untapped Information Resources in Earth Science

    Science.gov (United States)

    Ramachandran, R.; Fox, P. A.; Kempler, S.; Maskey, M.

    2015-12-01

    One of the continuing challenges in any Earth science investigation is the amount of time and effort required for data preparation before analysis can begin. Current Earth science data and information systems have their own shortcomings. For example, the current data search systems are designed with the assumption that researchers find data primarily by metadata searches on instrument or geophysical keywords, assuming that users have sufficient knowledge of the domain vocabulary to be able to effectively utilize the search catalogs. These systems lack support for new or interdisciplinary researchers who may be unfamiliar with the domain vocabulary or the breadth of relevant data available. There is clearly a need to innovate and evolve current data and information systems in order to improve data discovery and exploration capabilities to substantially reduce the data preparation time and effort. We assert that Earth science metadata assets are dark resources: information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes. The challenge for any organization is to recognize, identify and effectively utilize the dark data stores in their institutional repositories to better serve their stakeholders. NASA Earth science metadata catalogs contain dark resources consisting of structured information, free-form descriptions of data and pre-generated images. With the addition of emerging semantic technologies, such catalogs can be fully utilized beyond their original design intent of supporting current search functionality. In this presentation, we will describe our approach of exploiting these information resources to provide novel data discovery and exploration pathways to science and education communities.

  12. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for Japanese, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in various learners' resources for Japanese. We concentrate in particular on a few collocations at the beginner's level, demonstrating their treatment across the various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a greater foreign-language learning burden than predictable ones.

  13. Automation of information decision support to improve e-learning resources quality

    Directory of Open Access Journals (Sweden)

    A.L. Danchenko

    2013-06-01

    Purpose. Amid the active development of e-learning, the high quality of e-learning resources is very important. Ensuring the high quality of e-learning resources, in a situation of mass higher education and rapid obsolescence of information, requires automating information decision support for improving the quality of e-learning resources through the development of a decision support system. Methodology. The problem is solved by methods of artificial intelligence. The knowledge base of the information structure of the decision support system, based on a frame model of knowledge representation, and the production rules for inference are developed. Findings. Based on the results of the analysis of life cycle processes and of the requirements on e-learning resource quality, the information model of the structure of the knowledge base of the decision support system, the inference rules for automatically generating recommendations, and the software implementation are developed. Practical value. It is established that the basic requirements for quality are performance, validity, reliability and manufacturability. It is shown that using a software implementation of the decision support system for the courses studied yields a growth in quality according to the complex quality criteria. The information structure of the knowledge base of the decision support system and the rules of inference can be used by methodologists and content developers of learning systems.

  14. Student perceptions on learning with online resources in a flipped mathematics classroom

    DEFF Research Database (Denmark)

    Triantafyllou, Eva; Timcenko, Olga

    2015-01-01

    This article discusses student perceptions of whether and how online resources contribute to mathematics learning and motivation. It includes results from an online survey we conducted at the Media Technology department of Aalborg University, Copenhagen, Denmark. For this study, students were given links to various online resources (screencasts, online readings and quizzes, and lecture notes) for out-of-class preparation in a flipped classroom in mathematics. The survey results show support for student perceptions that online resources enhance learning, by providing visual and in-depth explanations, and that they can motivate students. However, students stated that they miss just-in-time explanations when learning with online resources, and they questioned the quality and validity of some of them.

  15. Exogenous and Endogenous Learning Resources in the Actiotope Model of Giftedness and Its Significance for Gifted Education

    Science.gov (United States)

    Ziegler, Albert; Chandler, Kimberley L.; Vialle, Wilma; Stoeger, Heidrun

    2017-01-01

    Based on the Actiotope Model of Giftedness, this article introduces a learning-resource-oriented approach for gifted education. It provides a comprehensive categorization of learning resources, including five exogenous learning resources termed "educational capital" and five endogenous learning resources termed "learning…

  16. Human resource management and learning for innovation: pharmaceuticals in Mexico

    OpenAIRE

    Santiago-Rodriguez, Fernando

    2010-01-01

    This paper investigates the influence of human resource management on learning from internal and external sources of knowledge. Learning for innovation is a key ingredient of catching-up processes. The analysis builds on survey data about pharmaceutical firms in Mexico. Results show that the influence of human resource management is contingent on the knowledge flows and innovation goals pursued by the firm. Practices such as training, particularly from external partners, and remuneration for...

  17. Sustainable Development in the Engineering Curriculum: Teaching and Learning Resources

    OpenAIRE

    Penlington, Roger; Steiner, Simon

    2014-01-01

    This repository of teaching and learning resources is a companion to the 2nd edition of “An Introduction to Sustainable Development in the Engineering Curriculum”, by Roger Penlington and Simon Steiner, originally created by The Higher Education Academy Engineering Subject Centre, Loughborough University. The purpose of this collection of teaching and learning resources is to provide access, with a brief resumé, to materials in curricula reform, recognition awards, and university movemen...

  18. An open repositories network development for medical teaching resources.

    Science.gov (United States)

    Soula, Gérard; Darmoni, Stefan; Le Beux, Pierre; Renard, Jean-Marie; Dahamna, Badisse; Fieschi, Marius

    2010-01-01

    The lack of interoperability between repositories of heterogeneous and geographically widespread data is an obstacle to the diffusion, sharing and reutilization of those data. We present the development of an open repositories network taking into account both the syntactic and semantic interoperability of the different repositories and based on international standards in this field. The network is used by the medical community in France for the diffusion and sharing of digital teaching resources. The syntactic interoperability of the repositories is managed using the OAI-PMH protocol for the exchange of metadata describing the resources. Semantic interoperability is based, on one hand, on the LOM standard for the description of resources and on MESH for the indexing of the latter and, on the other hand, on semantic interoperability management designed to optimize compliance with standards and the quality of the metadata.
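    The OAI-PMH exchange that underpins such a repository network boils down to a ListRecords loop driven by resumption tokens. A minimal sketch follows, with the repository's responses stubbed in memory rather than fetched over HTTP; the identifiers and token values are invented:

```python
# Minimal OAI-PMH ListRecords harvesting loop.  Responses are stubbed
# in-memory; a real harvester would issue HTTP GETs against the
# repository's OAI endpoint with the same verb/token parameters.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

PAGES = {
    None: """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <ListRecords>
        <record><header><identifier>oai:demo:1</identifier></header></record>
        <resumptionToken>page-2</resumptionToken>
      </ListRecords></OAI-PMH>""",
    "page-2": """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <ListRecords>
        <record><header><identifier>oai:demo:2</identifier></header></record>
        <resumptionToken/>
      </ListRecords></OAI-PMH>""",
}

def fetch(token):
    """Stand-in for an HTTP GET on the repository's OAI endpoint."""
    return PAGES[token]

def harvest():
    """Follow resumptionToken until the repository returns an empty one."""
    identifiers, token = [], None
    while True:
        root = ET.fromstring(fetch(token))
        lst = root.find(OAI + "ListRecords")
        for rec in lst.findall(OAI + "record"):
            identifiers.append(
                rec.find(OAI + "header").findtext(OAI + "identifier"))
        token = lst.findtext(OAI + "resumptionToken")
        if not token:       # an empty or missing token ends the list
            break
    return identifiers

print(harvest())   # ['oai:demo:1', 'oai:demo:2']
```

The empty final `<resumptionToken/>` is how OAI-PMH signals the end of an incomplete-list sequence, which is why the loop tests for a falsy token rather than for the element's absence alone.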

  19. Discovery and publishing of primary biodiversity data associated with multimedia resources: The Audubon Core strategies and approaches

    Directory of Open Access Journals (Sweden)

    Robert A Morris

    2013-07-01

    The Audubon Core Multimedia Resource Metadata Schema is a representation-free vocabulary for the description of biodiversity multimedia resources and collections, now in the final stages as a proposed Biodiversity Information Standards (TDWG) standard. By defining only six terms as mandatory, it seeks to lighten the burden of providing or using multimedia useful for biodiversity science. At the same time it offers rich optional metadata terms that can help curators of multimedia collections provide authoritative media that document species occurrence, ecosystems, identification tools, ontologies, and many other kinds of biodiversity documents or data. About half of the vocabulary is re-used from other relevant controlled vocabularies that are often already in use for multimedia metadata, thereby reducing the mapping burden on existing repositories. A central design goal is to give consuming applications a high likelihood of discovering suitable resources, reducing the human examination effort that might be required to decide whether a resource is fit for the purpose of the application.

  20. OAI-PMH repositories : quality issues regarding metadata and protocol compliance, tutorial 1

    CERN Multimedia

    CERN. Geneva; Cole, Tim

    2005-01-01

    This tutorial will provide an overview of emerging guidelines and best practices for OAI data providers and how they relate to expectations and needs of service providers. The audience should already be familiar with OAI protocol basics and have at least some experience with either data provider or service provider implementations. The speakers will present both protocol compliance best practices and general recommendations for creating and disseminating high-quality "shareable metadata". Protocol best practices discussion will include coverage of OAI identifiers, date-stamps, deleted records, sets, resumption tokens, about containers, branding, errors conditions, HTTP server issues, and repository lifecycle issues. Discussion of what makes for good, shareable metadata will cover topics including character encoding, namespace and XML schema issues, metadata crosswalk issues, support of multiple metadata formats, general metadata authoring recommendations, specific recommendations for use of Dublin Core elemen...

  1. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  2. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  3. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2015-04-01

The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space-efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
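The benefit described above, metadata edits that never touch or copy the pixel bytes, can be sketched with a toy model. Plain Python dicts stand in for the MSD containers here; the tag names are real DICOM attribute names, but the structure is purely illustrative and is not the actual MSD file layout:

```python
import copy

def deidentify(study):
    """Return a deidentified study that shares the original bulk data."""
    clean = {
        "metadata": copy.deepcopy(study["metadata"]),
        "bulk": study["bulk"],  # shared by reference, never copied
    }
    for tag in ("PatientName", "PatientID", "PatientBirthDate"):
        clean["metadata"].pop(tag, None)
    return clean

study = {
    "metadata": {"PatientName": "DOE^JANE", "PatientID": "123", "Modality": "CT"},
    "bulk": bytes(512 * 512 * 2),  # stand-in for one frame of pixel data
}

clean = deidentify(study)
assert "PatientName" not in clean["metadata"]
assert clean["bulk"] is study["bulk"]  # zero extra pixel storage
```

Because `clean["bulk"]` is the very same object as `study["bulk"]`, the deidentified copy costs only the size of its metadata, which mirrors the space saving the authors report for MSD.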

  4. The Use of Metadata Visualisation to Assist Information Retrieval

    Science.gov (United States)

    2007-10-01

    ...that person. Music files also have metadata tags, in a format called ID3. This usually contains information such as the artist, the song title, the album title, the track length and the genre of music. Again, any of these pieces of information can be used to quickly search and locate specific tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection. Metadata is...

  5. Towards a semantic learning model fostering learning object reusability

    OpenAIRE

    Fernandes, Emmanuel; Madhour, Hend; Wentland Forte, Maia; Miniaoui, Sami

    2005-01-01

    We try in this paper to propose a domain model for both author's and learner's needs concerning learning objects reuse. First of all, we present four key criteria for an efficient authoring tool: adaptive level of granularity, flexibility, integration and interoperability. Secondly, we introduce and describe our six-level Semantic Learning Model (SLM) designed to facilitate multi-level reuse of learning materials and search by defining a multi-layer model for metadata. Finally, after mapping ...

  6. Impact of e-resources on learning in biochemistry: first-year medical students’ perceptions

    Science.gov (United States)

    2012-01-01

    Background E-learning resources (e-resources) have been widely used to facilitate self-directed learning among medical students. The Department of Biochemistry at Christian Medical College (CMC), Vellore, India, has made available e-resources to first-year medical students to supplement conventional lecture-based teaching in the subject. This study was designed to assess students’ perceptions of the impact of these e-resources on various aspects of their learning in biochemistry. Methods Sixty first-year medical students were the subjects of this study. At the end of the one-year course in biochemistry, the students were administered a questionnaire that asked them to assess the impact of the e-resources on various aspects of their learning in biochemistry. Results Ninety-eight percent of students had used the e-resources provided to varying extents. Most of them found the e-resources provided useful and of a high quality. The majority of them used these resources to prepare for periodic formative and final summative assessments in the course. The use of these resources increased steadily as the academic year progressed. Students said that the extent to which they understood the subject (83%) and their ability to answer questions in assessments (86%) had improved as a result of using these resources. They also said that they found biochemistry interesting (73%) and felt motivated to study the subject (59%). Conclusions We found that first-year medical students extensively used the e-resources in biochemistry that were provided. They perceived that these resources had made a positive impact on various aspects of their learning in biochemistry. We conclude that e-resources are a useful supplement to conventional lecture-based teaching in the medical curriculum. PMID:22510159

  7. Linked data for libraries, archives and museums how to clean, link and publish your metadata

    CERN Document Server

    Hooland, Seth van

    2014-01-01

    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited re...

  8. Ready to put metadata on the post-2015 development agenda? Linking data publications to responsible innovation and science diplomacy.

    Science.gov (United States)

    Özdemir, Vural; Kolker, Eugene; Hotez, Peter J; Mohin, Sophie; Prainsack, Barbara; Wynne, Brian; Vayena, Effy; Coşkun, Yavuz; Dereli, Türkay; Huzair, Farah; Borda-Rodriguez, Alexander; Bragazzi, Nicola Luigi; Faris, Jack; Ramesar, Raj; Wonkam, Ambroise; Dandara, Collet; Nair, Bipin; Llerena, Adrián; Kılıç, Koray; Jain, Rekha; Reddy, Panga Jaipal; Gollapalli, Kishore; Srivastava, Sanjeeva; Kickbusch, Ilona

    2014-01-01

    Metadata refer to descriptions about data or, as some put it, "data about data." Metadata capture what happens on the backstage of science, on the trajectory from study conception, design, funding, implementation, and analysis to reporting. Definitions of metadata vary, but they can include the context information surrounding the practice of science, or data generated as one uses a technology, including transactional information about the user. As the pursuit of knowledge broadens in the 21st century from traditional "science of whats" (data) to include "science of hows" (metadata), we analyze the ways in which metadata serve as a catalyst for responsible and open innovation, and by extension, science diplomacy. In 2015, the United Nations Millennium Development Goals (MDGs) will formally come to an end. Therefore, we propose that metadata, as an ingredient of responsible innovation, can help achieve the Sustainable Development Goals (SDGs) on the post-2015 agenda. Such responsible innovation, as a collective learning process, has become a key component, for example, of the European Union's 80 billion Euro Horizon 2020 R&D Program from 2014-2020. Looking ahead, OMICS: A Journal of Integrative Biology, is launching an initiative for a multi-omics metadata checklist that is flexible yet comprehensive, and will enable more complete utilization of single and multi-omics data sets through data harmonization and greater visibility and accessibility. The generation of metadata that shed light on how omics research is carried out, by whom and under what circumstances, will create an "intervention space" for integration of science with its socio-technical context. This will go a long way to addressing responsible innovation for a fairer and more transparent society. If we believe in science, then such reflexive qualities and commitments attained by availability of omics metadata are preconditions for a robust and socially attuned science, which can then remain broadly

  9. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
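The normalization step can be illustrated with a drastic simplification: the paper derives calibration automatically from a 3D human model, whereas the sketch below simply assumes each camera has already been given a single metres-per-pixel scale. The camera names, scales, and detections are all invented:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    pixel_height: float  # foot-to-head extent in image pixels

# Assumed output of the per-camera calibration step: metres per pixel
# at the subject's position (the paper computes this automatically).
CAMERA_SCALE = {"cam_a": 0.010, "cam_b": 0.025}

def normalized_height(d: Detection) -> float:
    """Convert an image-space measurement into camera-independent metadata."""
    return d.pixel_height * CAMERA_SCALE[d.camera_id]

# The same 1.75 m person seen by two heterogeneous cameras:
a = Detection("cam_a", 175.0)
b = Detection("cam_b", 70.0)
assert abs(normalized_height(a) - normalized_height(b)) < 1e-9
```

Once every camera emits metadata in the same physical units, a query such as "person about 1.75 m tall" can be matched against detections from any camera in the network.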

  10. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    Science.gov (United States)

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  11. Statistical Data Processing with R – Metadata Driven Approach

    Directory of Open Access Journals (Sweden)

    Rudi SELJAK

    2016-06-01

    Full Text Available In recent years the Statistical Office of the Republic of Slovenia has put a lot of effort into re-designing its statistical process. We replaced the classical stove-pipe oriented production system with general software solutions, based on the metadata driven approach. This means that one general program code, which is parametrized with process metadata, is used for data processing for a particular survey. Currently, the general program code is entirely based on SAS macros, but in the future we would like to explore how successfully the statistical software R can be used for this approach. The paper describes the metadata driven principle for data validation, the generic software solution and the main issues connected with the use of the statistical software R for this approach.
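The metadata-driven principle can be shown in a few lines. The office's implementation uses SAS macros and is exploring R; Python is used here only to keep all examples in this survey in one language, and the rule set is invented. The point is that the validation code is generic, while everything survey-specific arrives as process metadata:

```python
# Hypothetical process metadata for one survey: field types and bounds.
RULES = {
    "turnover": {"type": float, "min": 0.0},
    "employees": {"type": int, "min": 0},
}

def validate(record, rules):
    """Generic validation routine: nothing survey-specific is hard-coded;
    it is all parameterized through the rules metadata."""
    errors = []
    for field, rule in rules.items():
        value = record.get(field)
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif value < rule["min"]:
            errors.append(f"{field}: below minimum {rule['min']}")
    return errors

assert validate({"turnover": 120.5, "employees": 4}, RULES) == []
assert validate({"turnover": -1.0, "employees": 4}, RULES) == ["turnover: below minimum 0.0"]
```

Running a different survey then means supplying a different `RULES` table, not writing new program code, which is exactly the stove-pipe replacement the abstract describes.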

  12. Circular dichroism spectral data and metadata in the Protein Circular Dichroism Data Bank (PCDDB): a tutorial guide to accession and deposition.

    Science.gov (United States)

    Janes, Robert W; Miles, A J; Woollett, B; Whitmore, L; Klose, D; Wallace, B A

    2012-09-01

    The Protein Circular Dichroism Data Bank (PCDDB) is a web-based resource containing circular dichroism (CD) and synchrotron radiation circular dichroism spectral and associated metadata located at http://pcddb.cryst.bbk.ac.uk. This resource provides a freely available, user-friendly means of accessing validated CD spectra and their associated experimental details and metadata, thereby enabling broad usage of this material and new developments across the structural biology, chemistry, and bioinformatics communities. The resource also enables researchers utilizing CD as an experimental technique to have a means of storing their data at a secure site from which it is easily retrievable, thereby making their results publicly accessible, a current requirement of many grant-funding agencies world-wide, as well as meeting the data-sharing requirements for journal publications. This tutorial provides extensive information on searching, accessing, and downloading procedures for those who wish to utilize the data available in the data bank, and detailed information on deposition procedures for creating and validating entries, including comprehensive explanations of their contents and formats, for those who wish to include their data in the data bank. Copyright © 2012 Wiley Periodicals, Inc.

  13. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes component-based approach (CBA) for learning management system development. Learning object as components of e-learning courses and their metadata is considered. The architecture of learning management system based on CBA being developed in Riga Technical University, namely its architecture, elements and possibilities are…

  14. Progress Report on the Airborne Metadata and Time Series Working Groups of the 2016 ESDSWG

    Science.gov (United States)

    Evans, K. D.; Northup, E. A.; Chen, G.; Conover, H.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.

    2016-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) was created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from users' experiences. Each group works independently focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, including MEaSUREs and ROSS. Participants include NASA information technology experts, affiliated contractor staff and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long term data products. The Airborne Metadata Working Group is evaluating the suitability of the current Common Metadata Repository (CMR) and Unified Metadata Model (UMM) for airborne data sets and to develop new recommendations as necessary. The overarching goal is to enhance the usability, interoperability, discovery and distribution of airborne observational data sets. This will be done by assessing the suitability (gaps) of the current UMM model for airborne data using lessons learned from current and past field campaigns, listening to user needs and community recommendations and assessing the suitability of ISO metadata and other standards to fill the gaps. The Time Series Working Group (TSWG) is a continuation of the 2015 Time Series/WaterML2 Working Group. The TSWG is using a case study-driven approach to test the new Open Geospatial Consortium (OGC) TimeseriesML standard to determine any deficiencies with respect to its ability to fully describe and encode NASA earth observation-derived time series data. To do this, the time series working group is engaging with the OGC TimeseriesML Standards Working Group (SWG) regarding unsatisfied needs and possible solutions. The effort will end with the drafting of an OGC Engineering Report based on the use cases and interactions with the OGC TimeseriesML SWG. Progress towards finalizing

  15. Electronic Resource Management Systems

    Directory of Open Access Journals (Sweden)

    Mark Ellingsen

    2004-10-01

    Full Text Available Computer applications which deal with electronic resource management (ERM) are quite a recent development. They have grown out of the need to manage the burgeoning number of electronic resources, particularly electronic journals. Typically, in the early years of e-journal acquisition, library staff provided an easy means of accessing these journals by providing an alphabetical list on a web page. Some went as far as categorising the e-journals by subject and then grouping the journals either on a single web page or by using multiple pages. It didn't take long before it was recognised that it would be more efficient to dynamically generate the pages from a database rather than to continually edit the pages manually. Of course, once the descriptive metadata for an electronic journal was held within a database the next logical step was to provide administrative forms whereby that metadata could be manipulated. This in turn led to demands for incorporating more information and more functionality into the developing application.
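The step the abstract describes, generating the journal list page from a database instead of hand-editing it, is easy to sketch. The schema, journal titles, and URLs below are invented, and `sqlite3` from the Python standard library stands in for whatever database an ERM system actually uses:

```python
import sqlite3

# In-memory database holding descriptive metadata for two invented e-journals.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ejournal (title TEXT, url TEXT, subject TEXT)")
conn.executemany(
    "INSERT INTO ejournal VALUES (?, ?, ?)",
    [
        ("Ariadne", "https://example.org/ariadne", "Library Science"),
        ("Code4Lib Journal", "https://example.org/c4l", "Library Science"),
    ],
)

def journal_list_html(conn):
    """Render the alphabetical e-journal list page from the metadata table."""
    rows = conn.execute("SELECT title, url FROM ejournal ORDER BY title")
    items = "".join(f'<li><a href="{url}">{title}</a></li>' for title, url in rows)
    return f"<ul>{items}</ul>"

html = journal_list_html(conn)
assert html.startswith("<ul><li>")
```

Adding or correcting a journal is now a row update rather than a page edit, and the same table can back the administrative forms the abstract mentions.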

  16. Getting educated: e-learning resources in the design and execution of surgical trials.

    Science.gov (United States)

    Bains, Simrit

    2009-01-01

    An evidence-based approach to research, which includes important aspects such as critical appraisal, is essential for the effective conduct of clinical trials. Researchers who are interested in educating themselves about its principles in order to incorporate them into their trials face challenges when attempting to acquire this information from traditional learning sources. E-learning resources offer an intriguing possibility of overcoming the challenges posed by traditional learning, and show promise as a way to expand accessibility to quality education about evidence-based principles. An assessment of existing e-learning resources reveals positive educational avenues for researchers, although significant flaws exist. The Global Educator™ by Global Research Solutions addresses many of these flaws and is an e-learning resource that combines convenience with comprehensiveness.

  17. Mediated learning in the workplace: student perspectives on knowledge resources.

    Science.gov (United States)

    Shanahan, Madeleine

    2015-01-01

    In contemporary clinical practice, student radiographers can use many types of knowledge resources to support their learning. These include workplace experts, digital and nondigital information sources (e.g., journals, textbooks, and the Internet), and electronic communication tools such as e-mail and social media. Despite the range of knowledge tools available, there is little available data about radiography students' use of these resources during clinical placement. A 68-item questionnaire was distributed to 62 students enrolled in an Australian university undergraduate radiography program after they completed a clinical placement. Researchers used descriptive statistics to analyze student access to workplace experts and their use of digital and nondigital information sources and electronic communication tools. A 5-point Likert scale (1 = very important; 5 = not important) was used to assess the present importance and perceived future value of knowledge tools for workplace learning. Of the 53 students who completed and returned the questionnaire anonymously, most rely on the knowledge of practicing technologists and on print and electronic information sources to support their learning; some students also use electronic communication tools. Students perceive that these knowledge resources also will be important tools for their future learning as qualified health professionals. The findings from this study present baseline data regarding the value students attribute to multiple knowledge tools and regarding student access to and use of these tools during clinical placement. In addition, most students have access to multiple knowledge tools in the workplace and incorporate these tools simultaneously into their overall learning practice during clinical placement. Although a range of knowledge tools is used in the workplace to support learning among student radiographers, the quality of each tool should be critically analyzed before it is adopted in practice.

  18. An International Survey of Veterinary Students to Assess Their Use of Online Learning Resources.

    Science.gov (United States)

    Gledhill, Laura; Dale, Vicki H M; Powney, Sonya; Gaitskell-Phillips, Gemma H L; Short, Nick R M

    Today's veterinary students have access to a wide range of online resources that support self-directed learning. To develop a benchmark of current global student practice in e-learning, this study measured self-reported access to, and use of, these resources by students internationally. An online survey was designed and promoted via veterinary student mailing lists and international organizations, resulting in 1,070 responses. Analysis of survey data indicated that students now use online resources in a wide range of ways to support their learning. Students reported that access to online veterinary learning resources was now integral to their studies. Almost all students reported using open educational resources (OERs). Ownership of smartphones was widespread, and the majority of respondents agreed that the use of mobile devices, or m-learning, was essential. Social media were highlighted as important for collaborating with peers and sharing knowledge. Constraints to e-learning principally related to poor or absent Internet access and limited institutional provision of computer facilities. There was significant geographical variation, with students from less developed countries disadvantaged by limited access to technology and networks. In conclusion, the survey provides an international benchmark on the range and diversity in terms of access to, and use of, online learning resources by veterinary students globally. It also highlights the inequalities of access among students in different parts of the world.

  19. The use of public health e-learning resources by pharmacists in Wales: a quantitative evaluation.

    Science.gov (United States)

    Evans, Andrew; Evans, Sian; Roberts, Debra

    2016-08-01

    The aim of this study was to examine how communicable disease e-learning resources were utilised by pharmacy professionals and to identify whether uptake of the resources was influenced by disease outbreaks. Retrospective analysis of routine data regarding the number of individuals completing e-learning resources and statutory notifications of communicable disease. A high proportion of pharmacy professionals in Wales (38.8%, n = 915/2357) accessed the resources; around one in six completed multiple resources (n = 156). The most commonly accessed were those where there had been a disease outbreak during the study period. There was a strong positive correlation between e-learning uptake and number of disease cases; this was observed both for measles and scarlet fever. Communicable disease e-learning appears to be an acceptable method for providing communicable disease information to pharmacy professionals. Study findings suggest that e-learning uptake is positively influenced by disease outbreaks; this reflects well both on pharmacy professionals and on the e-learning resources themselves. © 2016 Royal Pharmaceutical Society.
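The "strong positive correlation" reported above is an ordinary Pearson coefficient between two count series. A minimal sketch with invented monthly counts (the study's actual data are not reproduced here):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cases  = [2, 5, 9, 14, 20]   # hypothetical monthly disease notifications
uptake = [4, 9, 15, 30, 41]  # hypothetical e-learning completions

r = pearson(cases, uptake)
assert r > 0.9  # a "strong positive correlation" in these invented data
```

Values of r near +1 indicate that months with more notified cases also saw more resource completions, which is the pattern the authors observed for measles and scarlet fever.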

  20. Digital Cadavers: Online 2D Learning Resources Enhance Student Learning in Practical Head and Neck Anatomy within Dental Programs

    Directory of Open Access Journals (Sweden)

    Mahmoud M. Bakr

    2016-01-01

    Full Text Available Head and neck anatomy provides core concepts within preclinical dental curricula. Increased student numbers, reduced curricula time, and restricted access to laboratory-based human resources have increased technology enhanced learning approaches to support student learning. Potential advantages include cost-effectiveness, off-campus access, and self-directed review or mastery opportunities for students. This study investigated successful student learning within a first-year head and neck anatomy course at the School of Dentistry and Oral Health, Griffith University, Australia, taught by the same teaching team, between 2010 and 2015. Student learning success was compared, for cohorts before and after implementation of a supplementary, purpose-designed online digital library and quiz bank. Success of these online resources was confirmed using overall students’ performance within the course assessment tasks and Student Evaluation of Course surveys and online access data. Engagement with these supplementary 2D online resources, targeted at improving laboratory study, was positively evaluated by students (mean 85%) and significantly increased their laboratory grades (mean difference 6%, P<0.027), despite being assessed using cadaveric resources. Written assessments in final exams were not significantly improved. Expanded use of supplementary online resources is planned to support student learning and success in head and neck anatomy, given the success of this intervention.

  1. Introducing RFID at Middlesex University Learning Resources

    Science.gov (United States)

    Hopkinson, Alan; Chandrakar, Rajesh

    2006-01-01

    Purpose: To describe the first year of the implementation of radio frequency identification (RFID) in Middlesex University Learning Resources. Design/methodology/approach: The technology is explained in detail to set the scene. Information on the implementation is presented in chronological order. Findings: Problems which would generally be…

  2. Relationship between learning resources and student's academic ...

    African Journals Online (AJOL)

    The study investigated relationship between learning resources and student's academic achievement in science subjects in Taraba State Secondary Schools. A total of 35 science teachers and 18 science head of departments from 6 schools from three geopolitical zones of Taraba State were involved in the study.

  3. Diagnostic imaging learning resources evaluated by students and recent graduates.

    Science.gov (United States)

    Alexander, Kate; Bélisle, Marilou; Dallaire, Sébastien; Fernandez, Nicolas; Doucet, Michèle

    2013-01-01

    Many learning resources can help students develop the problem-solving abilities and clinical skills required for diagnostic imaging. This study explored veterinary students' perceptions of the usefulness of a variety of learning resources. Perceived resource usefulness was measured for different levels of students and for academic versus clinical preparation. Third-year (n=139) and final (fifth) year (n=105) students and recent graduates (n=56) completed questionnaires on perceived usefulness of each resource. Resources were grouped for comparison: abstract/low complexity (e.g., notes, multimedia presentations), abstract/high complexity (e.g., Web-based and film case repositories), concrete/low complexity (e.g., large-group "clicker" workshops), and concrete/high complexity (e.g., small-group interpretation workshops). Lower-level students considered abstract/low-complexity resources more useful for academic preparation and concrete resources more useful for clinical preparation. Higher-level students/recent graduates also considered abstract/low-complexity resources more useful for academic preparation. For all levels, lecture notes were considered highly useful. Multimedia slideshows were an interactive complement to notes. The usefulness of a Web-based case repository was limited by accessibility problems and difficulty. Traditional abstract/low-complexity resources were considered useful for more levels and contexts than expected. Concrete/high-complexity resources need to better represent clinical practice to be considered more useful for clinical preparation.

  4. Sharing learning experiences through correspondence on the WWW

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Okamoto, T; Hartley, R.; Kinshuk, T.; Klus, J.P.

    2001-01-01

    Asynchronous learning networks are facilities and procedures to allow members of learning communities to be more effective and efficient in their learning. One approach is to see how the `sharing' of knowledge can be augmented through meta-data descriptions attached to portfolios and project work.

  5. The Genomes On Line Database (GOLD) in 2009: status of genomic and metagenomic projects and their associated metadata

    Science.gov (United States)

    Liolios, Konstantinos; Chen, I-Min A.; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M.; Kyrpides, Nikos C.

    2010-01-01

    The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/ PMID:19914934

  6. Collaborative Learning in Practice: Examples from Natural Resource ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-12-01

    Case studies show how, through joint efforts with researchers and other actors, local ... address and learn from challenges in managing natural resources. ... health, and health systems research relevant to the emerging crisis.

  7. DESIGN AND PRACTICE ON METADATA SERVICE SYSTEM OF SURVEYING AND MAPPING RESULTS BASED ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-08-01

    Full Text Available Based on the analysis and research on the current geographic information sharing and metadata service, we design, develop and deploy a distributed metadata service system based on GeoNetwork covering more than 30 nodes in provincial units of China. By identifying the advantages of GeoNetwork, we design a distributed metadata service system of national surveying and mapping results. It consists of 31 network nodes, a central node and a portal. Network nodes are the direct system metadata source, and are distributed around the country. Each network node maintains a metadata service system, responsible for metadata uploading and management. The central node harvests metadata from network nodes using the OGC CSW 2.0.2 standard interface. The portal shows all metadata in the central node, provides users with a variety of methods and interfaces for metadata search and querying, and also provides management capabilities for connecting the central node and the network nodes together. There are defects with GeoNetwork too. Accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization and concurrent access. For metadata uploading and synchronization, by carefully analysing the database and index operation logs, we successfully avoided the performance bottlenecks; and with a batch operation and dynamic memory management solution, data throughput and system performance are significantly improved. For concurrent access, through a request coding and results cache solution, query performance is greatly improved. To smoothly respond to huge concurrent requests, a web cluster solution is deployed. This paper also gives an experimental analysis and compares the system performance before and after improvement and optimization. The design and practical results have been applied in the national metadata service system of surveying and mapping results. It proved that the improved GeoNetwork service architecture can effectively adapt for
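The harvesting interface mentioned above, OGC CSW 2.0.2 `GetRecords`, is worth a concrete sketch. The snippet builds the kind of request envelope a central node would POST to each network node; the element names follow the CSW 2.0.2 schema, while the paging parameters are illustrative and no network call is made:

```python
import xml.etree.ElementTree as ET

CSW = "http://www.opengis.net/cat/csw/2.0.2"
ET.register_namespace("csw", CSW)

# Request envelope for one harvesting page; paging values are illustrative.
req = ET.Element(f"{{{CSW}}}GetRecords", {
    "service": "CSW",
    "version": "2.0.2",
    "resultType": "results",
    "startPosition": "1",
    "maxRecords": "100",
})
query = ET.SubElement(req, f"{{{CSW}}}Query", {"typeNames": "csw:Record"})
ET.SubElement(query, f"{{{CSW}}}ElementSetName").text = "full"

xml_body = ET.tostring(req, encoding="unicode")
assert "GetRecords" in xml_body and 'maxRecords="100"' in xml_body
```

Incrementing `startPosition` page by page is what makes the batch harvesting and synchronization described in the abstract feasible for large metadata volumes.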

  8. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of "information overload" in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. First, the algorithm applies two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model, from which job recommendations can be generated for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been run and analysed: the proposed algorithm achieves significant improvements in accuracy, recall rate and coverage compared with recommendation algorithms such as UserCF and ItemCF.
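
    As a minimal sketch of the merging step described above, the outputs of a Bagging learner and a Boosting learner can be combined into a single interest score per candidate job. All features, labels and the 50/50 weighting below are invented for illustration; the paper's actual model and its Spark implementation are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical job-seeker/job feature pairs with binary "applied" labels.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=20,
                            random_state=0).fit(X, y)
boosting = GradientBoostingClassifier(random_state=0).fit(X, y)

# Merge the two learners' outputs into one interest score per candidate job
# (an equal-weight average is an assumption, not the paper's merging rule).
candidates = rng.normal(size=(5, 6))
interest = 0.5 * bagging.predict_proba(candidates)[:, 1] \
         + 0.5 * boosting.predict_proba(candidates)[:, 1]
ranking = np.argsort(interest)[::-1]  # recommend highest-interest jobs first
print(ranking)
```

    In a Spark deployment, the scoring step would be distributed over partitions of the candidate set, while the merged-model idea stays the same.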

  9. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
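
    The extraction described above amounts to a recursive walk over the structured content items of a DICOM dose report. The snippet below uses plain dictionaries as a stand-in for parsed DICOM objects; the field names and numeric values are illustrative, and a real extractor would read the items with a DICOM toolkit rather than build them by hand.

```python
# Minimal stand-in for a CT Radiation Dose structured report (hypothetical values).
dose_sr = {
    "ContentSequence": [
        {"ValueType": "CONTAINER", "ConceptName": "CT Acquisition",
         "ContentSequence": [
             {"ValueType": "NUM", "ConceptName": "Mean CTDIvol", "NumericValue": "12.5"},
             {"ValueType": "NUM", "ConceptName": "DLP", "NumericValue": "430.0"},
         ]},
    ]
}

def extract_numeric(item, concept):
    """Recursively walk SR content items, yielding numeric values for a named concept."""
    for child in item.get("ContentSequence", []):
        if child.get("ValueType") == "NUM" and child.get("ConceptName") == concept:
            yield float(child["NumericValue"])
        yield from extract_numeric(child, concept)

print(list(extract_numeric(dose_sr, "Mean CTDIvol")))  # [12.5]
```

    Because the values come from structured elements rather than a rendered screen, nothing is lost to character-recognition errors, which is the accuracy advantage the study reports.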

  10. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    Science.gov (United States)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata, which are critical for data discovery and for users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators to submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration using new search software, such as Apache Solr 6.0, for serving up data/metadata to scientific communities.
Our presentation will highlight

  11. Converting ODM Metadata to FHIR Questionnaire Resources.

    Science.gov (United States)

    Doods, Justin; Neuhaus, Philipp; Dugas, Martin

    2016-01-01

    Interoperability between systems and data sharing between domains is becoming more and more important. The portal medical-data-models.org offers more than 5,300 UMLS-annotated forms in CDISC ODM format in order to support interoperability, and several additional export formats are available. CDISC's ODM and HL7's FHIR Questionnaire resource were analyzed, a mapping between their elements was created, and a converter was implemented. The developed converter was integrated into the portal with FHIR Questionnaire XML and JSON download options. New FHIR applications can now use this large library of forms.
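
    A minimal sketch of such an ODM-to-Questionnaire mapping is shown below. The element names follow ODM and FHIR, but the fragment and the data-type mapping are simplified assumptions for illustration, not the portal's actual converter.

```python
import json
import xml.etree.ElementTree as ET

# Tiny hypothetical ODM fragment (real ODM forms are much richer).
odm = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <Study><MetaDataVersion>
    <ItemDef OID="I.1" Name="Age" DataType="integer"/>
    <ItemDef OID="I.2" Name="Diagnosis" DataType="string"/>
  </MetaDataVersion></Study>
</ODM>"""

ns = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}
type_map = {"integer": "integer", "string": "string"}  # assumed ODM->FHIR type mapping

root = ET.fromstring(odm)
questionnaire = {
    "resourceType": "Questionnaire",
    "status": "draft",
    # Each ODM ItemDef becomes one FHIR Questionnaire item.
    "item": [
        {"linkId": d.get("OID"), "text": d.get("Name"),
         "type": type_map[d.get("DataType")]}
        for d in root.findall(".//odm:ItemDef", ns)
    ],
}
print(json.dumps(questionnaire, indent=2))
```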

  12. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries riparian to all European seas. A single portal enables distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, area of interest), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship) and acquisition information (e.g. instruments used, number of samples taken). In the context of the second phase of SeaDataNet (SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years), a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of an ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now stable and is being implemented and adopted by the SeaDataNet community tools and software. The effort has since continued, producing an ISO-based metadata model and its XML encoding for CSR as well. The metadata elements included in the CSR profile belong to different models: - ISO 19115: E.g. cruise identification information, including

  13. Reinforcement learning techniques for controlling resources in power networks

    Science.gov (United States)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
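
    A toy illustration of the "learn from the actual dynamics" idea above: tabular Q-learning for a battery-dispatch problem in which the price distribution is never given to the controller, only sampled, and capacity limits are enforced as state constraints. All states, actions and numbers are invented for this sketch; the dissertation's algorithms are far more general.

```python
import numpy as np

rng = np.random.default_rng(1)
n_soc = 5             # discretized battery state-of-charge levels
actions = [-1, 0, 1]  # discharge, idle, charge
Q = np.zeros((n_soc, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(2000):           # episodes
    soc = int(rng.integers(n_soc))
    for _ in range(24):         # hourly decisions
        price = rng.normal(50.0, 10.0)  # unknown distribution, only sampled
        a = int(rng.integers(len(actions))) if rng.random() < eps \
            else int(Q[soc].argmax())
        # State-of-charge constraint: clip to feasible range.
        next_soc = min(max(soc + actions[a], 0), n_soc - 1)
        # Pay to charge, earn by discharging (only for the feasible move).
        reward = -(next_soc - soc) * price
        Q[soc, a] += alpha * (reward + gamma * Q[next_soc].max() - Q[soc, a])
        soc = next_soc

print(Q.argmax(axis=1))  # greedy action per state-of-charge level
```

    With i.i.d. prices and discounting, the learned policy favors discharging whenever charge is available; the point of the sketch is that this emerges from sampled interaction alone, with no explicit price model.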

  14. E-Learning – Using XML technologies to meet the special characteristics of higher education

    Directory of Open Access Journals (Sweden)

    Igor Kanovsky

    2004-02-01

    Full Text Available In this paper we claim that the current approach to learning objects and metadata standards is counterproductive for the integration of e-learning in higher education. We explain why higher education is different with regard to e-learning, and we suggest an approach that avoids the use of global standards in favor of an evolving set of metadata tags for an evolving community of practice. We demonstrate how XML technologies and some minimal technical help for the participating teachers can provide the required foundation for a productive process of integrating e-learning in higher education.

  15. Fostering Environmental Knowledge and Action through Online Learning Resources

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2010-01-01

    In order to secure correct understanding of environmental issues, to promote behavioral change and to encourage environmental action, more and more educational practices support and provide environmental programs. This article explores the design of online learning resources created for teachers...... and students by the GreenLearning environmental education program. The topic is approached from a social semiotic perspective. I conduct a multimodal analysis of the knowledge processes and the knowledge selection types that characterize the GreenLearning environmental education program and its online...

  16. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
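
    The appeal of a fixed, simple schema is that it maps directly onto columnar storage. A hypothetical sketch with a few invented fields (the real ATLAS TAG schema differs) shows how such records support straightforward export and selection:

```python
import numpy as np

# Hypothetical fixed TAG schema: run/event identifiers plus summary quantities.
tag_dtype = np.dtype([("run", "u4"), ("event", "u8"),
                      ("n_muons", "u2"), ("met_gev", "f4")])
tags = np.array([(1234, 1, 2, 45.2),
                 (1234, 2, 0, 12.7)], dtype=tag_dtype)

# A flat schema like this transfers directly to an HDF5 table, e.g. with h5py:
#   with h5py.File("tags.h5", "w") as f:
#       f.create_dataset("tags", data=tags, compression="gzip")

# Simple selection over the event metadata records:
high_met = tags[tags["met_gev"] > 20.0]
print(high_met["event"].tolist())  # [1]
```

    Once in a domain-neutral format, the same records can be handed to generic clustering or association-rule tools without HEP-specific I/O.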

  17. ncISO Facilitating Metadata and Scientific Data Discovery

    Science.gov (United States)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component of effectively using these catalog services is the availability of high-quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities, as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source of ocean and atmospheric data sets is NOAA's Unified Access Framework (UAF), which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-Lab's GI-Cat catalog service and ESRI's Geoportal, which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW).
    ESRI's Geoportal Server provides a number of user-friendly search capabilities for end users

  18. Dental and Medical Students' Use and Perceptions of Learning Resources in a Human Physiology Course.

    Science.gov (United States)

    Tain, Monica; Schwartzstein, Richard; Friedland, Bernard; Park, Sang E

    2017-09-01

    The aim of this study was to determine the use and perceived utility of various learning resources available during the first-year Integrated Human Physiology course at the dental and medical schools at Harvard University. Dental and medical students of the Class of 2018 were surveyed anonymously online in 2015 regarding their use of 29 learning resources in this combined course. The learning resources had been grouped into four categories to discern frequency of use and perceived usefulness among the categories. The survey was distributed to 169 students, and 73 responded for a response rate of 43.2%. There was no significant difference among the learning resource categories in frequency of use; however, there was a statistically significant difference among categories in students' perceptions of usefulness. No correlation was found between frequency of use and perceived usefulness of each category. Students seemingly were not choosing the most useful resources for them. These results suggest that, in the current educational environment, where new technologies and self-directed learning are highly sought after, there remains a need for instructor-guided learning.

  19. Measuring Social Learning in Participatory Approaches to Natural Resource Management

    NARCIS (Netherlands)

    Wal, van der M.M.; Kraker, de J.; Offermans, A.; Kroeze, C.; Kirschner, P.; Ittersum, van M.K.

    2014-01-01

    The role of social learning as a governance mechanism in natural resource management has been frequently highlighted, but progress in finding evidence for this role and gaining insight into the conditions that promote it are hampered by the lack of operational definitions of social learning and

  20. Measuring social learning in participatory approaches to natural resource management.

    NARCIS (Netherlands)

    Van der Wal, Merel; De Kraker, Joop; Offermans, Astrid; Kroeze, Carolien; Kirschner, Paul A.; Van Ittersum, Martin

    2018-01-01

    The role of social learning as a governance mechanism in natural resource management has been frequently highlighted, but progress in finding evidence for this role and gaining insight into the conditions that promote it are hampered by the lack of operational definitions of social learning and

  1. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    OpenAIRE

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes...

  2. Improving Earth Science Metadata: Modernizing ncISO

    Science.gov (United States)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently ncISO has fallen behind in terms of supporting updates to conventions such as updates to the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
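
    The rubric idea can be illustrated in a few lines: score a dataset's global attributes against a list of recommended ACDD fields and report what is missing. The list below is a partial, assumed subset of ACDD 1.3 attributes, not ncISO's actual rubric.

```python
# Partial, assumed subset of ACDD 1.3 recommended global attributes.
RECOMMENDED = ["title", "summary", "keywords", "Conventions", "creator_name",
               "license", "geospatial_lat_min", "geospatial_lat_max"]

def rubric_score(attrs):
    """Return (fraction of recommended attributes present, list of missing ones)."""
    missing = [k for k in RECOMMENDED if not attrs.get(k)]
    return 1 - len(missing) / len(RECOMMENDED), missing

# Hypothetical global attributes of a netCDF dataset.
attrs = {"title": "SST analysis", "summary": "Daily sea surface temperature",
         "Conventions": "CF-1.6, ACDD-1.3", "license": "Public domain"}
score, missing = rubric_score(attrs)
print(round(score, 2), missing)  # 0.5 plus the four absent attributes
```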

  3. Enhanced Resource Descriptions Help Learning Matrix Users.

    Science.gov (United States)

    Roempler, Kimberly S.

    2003-01-01

    Describes the Learning Matrix digital library which focuses on improving the preparation of math and science teachers by supporting faculty who teach introductory math and science courses in two- and four-year colleges. Suggests it is a valuable resource for school library media specialists to support new science and math teachers. (LRW)

  4. Enhancing Media Personalization by Extracting Similarity Knowledge from Metadata

    DEFF Research Database (Denmark)

    Butkus, Andrius

    ...... only “more of the same” type of content, which does not necessarily lead to meaningful personalization. Another way to approach similarity is to find a similar underlying meaning in the content. Aspects of meaning in media can be represented using Gärdenfors' Conceptual Spaces theory, which can be seen as a cognitive foundation for modeling concepts. Conceptual Spaces is applied in this thesis to analyze media in terms of its dimensions and knowledge domains, which in turn define properties and concepts. One of the most important domains in terms of describing media is the emotional one...... using Latent Semantic Analysis (one of the unsupervised machine learning techniques). The thesis presents three separate cases to illustrate similarity knowledge extraction from the metadata, where the emotional components in each case represent different abstraction levels – genres, synopsis and lyrics...

  5. mockrobiota: a Public Resource for Microbiome Bioinformatics Benchmarking.

    Science.gov (United States)

    Bokulich, Nicholas A; Rideout, Jai Ram; Mercurio, William G; Shiffer, Arron; Wolfe, Benjamin; Maurice, Corinne F; Dutton, Rachel J; Turnbaugh, Peter J; Knight, Rob; Caporaso, J Gregory

    2016-01-01

    Mock communities are an important tool for validating, optimizing, and comparing bioinformatics methods for microbial community analysis. We present mockrobiota, a public resource for sharing, validating, and documenting mock community data resources, available at http://caporaso-lab.github.io/mockrobiota/. The materials contained in mockrobiota include data set and sample metadata, expected composition data (taxonomy or gene annotations or reference sequences for mock community members), and links to raw data (e.g., raw sequence data) for each mock community data set. mockrobiota does not supply physical sample materials directly, but the data set metadata included for each mock community indicate whether physical sample materials are available. At the time of this writing, mockrobiota contains 11 mock community data sets with known species compositions, including bacterial, archaeal, and eukaryotic mock communities, analyzed by high-throughput marker gene sequencing. IMPORTANCE The availability of standard and public mock community data will facilitate ongoing method optimizations, comparisons across studies that share source data, and greater transparency and access and eliminate redundancy. These are also valuable resources for bioinformatics teaching and training. This dynamic resource is intended to expand and evolve to meet the changing needs of the omics community.

  6. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    Science.gov (United States)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool that enables data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support scientists with services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena, using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  7. Predicting age groups of Twitter users based on language and metadata features.

    Directory of Open Access Journals (Sweden)

    Antonio A Morgan-Lopez

    Full Text Available Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and the handle's metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features combined, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was fit for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both tweet language features and metadata features performed best (74% precision, 74% recall, 74% F1), while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately.
These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may
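
    The modeling approach described (L1-regularized logistic regression over combined language and metadata features) can be sketched as below. The tiny corpus, the labels and the single follower-count metadata feature are all invented for illustration; the study's feature set and evaluation are not reproduced here.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy tweets and age-group labels.
tweets = ["prom and school tomorrow", "college finals week",
          "school band practice", "my college roommate",
          "work and mortgage", "office meeting then gym"]
ages = ["youth", "young_adult", "youth", "young_adult", "adult", "adult"]
followers = np.array([[120], [340], [90], [410], [800], [950]])  # metadata feature

# Combine bag-of-words language features with the metadata column.
X_lang = CountVectorizer().fit_transform(tweets)
X = hstack([X_lang, followers])

# L1 penalty performs feature selection, as in the study's modeling setup.
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, ages)
print(clf.predict(X))
```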

  8. Desktop Publishing as a Learning Resources Service.

    Science.gov (United States)

    Drake, David

    In late 1988, Midland College in Texas implemented a desktop publishing service to produce instructional aids and reduce and complement the workload of the campus print shop. The desktop service was placed in the Media Services Department of the Learning Resource Center (LRC) for three reasons: the LRC was already established as a campus-wide…

  9. The Genomes OnLine Database (GOLD) v.4: status of genomic and metagenomic projects and their associated metadata

    Science.gov (United States)

    Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A.; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M.; Kyrpides, Nikos C.

    2012-01-01

    The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11 472 sequencing projects, of which 2907 have been completed and their sequence data has been deposited in a public repository. Out of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond. PMID:22135293

  10. Document Classification in Support of Automated Metadata Extraction From Heterogeneous Collections

    Science.gov (United States)

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  11. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    Science.gov (United States)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one is interested in communicating, an approach commonly called "model-based integration". This paper presents the results of research into the possible synergy between these two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
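
    The core of a metadata-driven ETL step can be illustrated in a few lines: the transformation logic is not hard-coded but read from a metadata object describing source fields, target names and conversions. The field names and rules below are hypothetical, chosen only to show the pattern.

```python
# Metadata object describing the ETL mapping (hypothetical fields and rules).
etl_metadata = {
    "source": "sales_feed",
    "mappings": [
        {"from": "cust_nm", "to": "customer_name", "transform": str.strip},
        {"from": "amt", "to": "amount_eur", "transform": float},
    ],
}

def run_etl(rows, meta):
    """Apply extract/transform rules described entirely by the metadata."""
    return [
        {m["to"]: m["transform"](row[m["from"]]) for m in meta["mappings"]}
        for row in rows
    ]

rows = [{"cust_nm": "  Acme Ltd ", "amt": "99.5"}]
print(run_etl(rows, etl_metadata))
```

    In the SOA setting described above, such metadata objects would be exchanged between services, so a consumer can drive ETL against a source system it has never seen before.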

  12. Ontology-based Metadata Portal for Unified Semantics

    Data.gov (United States)

    National Aeronautics and Space Administration — The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS) will extend the prototype Ontology-Driven Interactive Search Environment for Earth Sciences...

  13. Drivers and Effects of Enterprise Resource Planning Post-Implementation Learning

    Science.gov (United States)

    Chang, Hsiu-Hua; Chou, Huey-Wen

    2011-01-01

    The use of enterprise resource planning (ERP) systems has grown enormously since 1990, but the failure to completely learn how to use them continues to produce disappointing results. Today's rapidly changing business environment and the integrative applications of ERP systems force users to continuously learn new skills after ERP implementation.…

  14. Linguistic Resources and Cats: How to Use ISOcat, RELcat and SCHEMAcat

    NARCIS (Netherlands)

    Windhouwer, Menzo; Schuurman, Ineke; Calzolari, Nicoletta (Conference Chair); Choukri, Khalid; Declerck, Thierry; Loftsson, Hrafn; Maegaard, Bente; Mariani, Joseph; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios

    2014-01-01

    Within the European CLARIN infrastructure, ISOcat is used to enable both humans and computer programs to find specific resources even when they use different terminology or data structures. In order to do so, it should be clear which concepts are used in these resources, both at the level of metadata

  15. Radiological dose and metadata management

    International Nuclear Information System (INIS)

    Walz, M.; Madsack, B.; Kolodziej, M.

    2016-01-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented. (orig.) [de

  16. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, Norbert; Kovács, Zsolt; Le Goff, Jean-Marie; McClatchey, Richard

    2001-01-01

    One of the main factors driving object-oriented software development in the Web age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self-describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  17. BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences

    CERN Document Server

    McQuilton, Peter; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only on the resource itself, but also on its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the ...

  18. Work in the classrooms with European perspective: materials and resources for autonomous learning

    Directory of Open Access Journals (Sweden)

    Manuela RAPOSO RIVAS

    2011-04-01

    Full Text Available One of the key principles set forth in the Bologna Process is to focus teaching on students, getting them actively and independently involved in their learning process and in the development of their skills. This requires the use of teaching and learning methods together with materials and resources that motivate and guide them. In this paper, we present three of them, drawn from our experience in adapting the subject of «New Technologies Applied to Education» to the ECTS system, which we have found useful for guiding and evaluating the learning process and promoting the intended learning. These materials and resources are: «learning guides», the portfolio and the rubric.

  19. Resource Letter ALIP-1: Active-Learning Instruction in Physics

    Science.gov (United States)

    Meltzer, David E.; Thornton, Ronald K.

    2012-06-01

    This Resource Letter provides a guide to the literature on research-based active-learning instruction in physics. These are instructional methods that are based on, assessed by, and validated through research on the teaching and learning of physics. They involve students in their own learning more deeply and more intensely than does traditional instruction, particularly during class time. The instructional methods and supporting body of research reviewed here offer potential for significantly improved learning in comparison to traditional lecture-based methods of college and university physics instruction. We begin with an introduction to the history of active learning in physics in the United States, and then discuss some methods for and outcomes of assessing pedagogical effectiveness. We enumerate and describe common characteristics of successful active-learning instructional strategies in physics. We then discuss a range of methods for introducing active-learning instruction in physics and provide references to those methods for which there is published documentation of student learning gains.

  20. Competition for resources can explain patterns of social and individual learning in nature.

    Science.gov (United States)

    Smolla, Marco; Gilman, R Tucker; Galla, Tobias; Shultz, Susanne

    2015-09-22

    In nature, animals often ignore socially available information despite the multiple theoretical benefits of social learning over individual trial-and-error learning. Using information filtered by others is quicker, more efficient and less risky than randomly sampling the environment. To explain the mix of social and individual learning used by animals in nature, most models penalize the quality of socially derived information as either out of date, of poor fidelity or costly to acquire. Competition for limited resources, a fundamental evolutionary force, provides a compelling, yet hitherto overlooked, explanation for the evolution of mixed-learning strategies. We present a novel model of social learning that incorporates competition and demonstrates that (i) social learning is favoured when competition is weak, but (ii) if competition is strong social learning is favoured only when resource quality is highly variable and there is low environmental turnover. The frequency of social learning in our model always evolves until it reduces the mean foraging success of the population. The results of our model are consistent with empirical studies showing that individuals rely less on social information where resources vary little in quality and where there is high within-patch competition. Our model provides a framework for understanding the evolution of social learning, a prerequisite for human cumulative culture. © 2015 The Author(s).

  1. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Useful metadata is essential to giving individual images meaning and value within the context of a greater collection, as well as making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowing who appears in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command-line tools to assign, modify, and read metadata for large archival photograph collections.
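
    A rough sketch of the batch-assignment step, assuming a face-recognition pass has already produced names per photograph. The file names, people and the single "subject" field are invented; the article itself pairs recognition software with command-line metadata tools.

```python
# Hypothetical output of a face-recognition pass over an archive.
recognized = {
    "img_001.jpg": ["J. Smith"],
    "img_002.jpg": ["J. Smith", "A. Jones"],
}

def build_metadata(recognized):
    """Assign a consistent, sorted subject list to every photograph."""
    return {photo: {"subject": sorted(names)} for photo, names in recognized.items()}

records = build_metadata(recognized)
```

    In practice the resulting records would be written back into the image files (or a sidecar) with a metadata-writing tool rather than kept in memory.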

  2. Phosphorus retention data and metadata

    Science.gov (United States)

    Phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane, C., and B. Autrey. Phosphorus retention of forested and emergent marsh depressional wetlands in differing land uses in Florida, USA. Wetlands Ecology and Management. Springer Science and Business Media B.V; formerly Kluwer Academic Publishers B.V., GERMANY, 24(1): 45-60, (2016).

  3. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    Science.gov (United States)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating and providing tools to create metadata about the models and processes used to create its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV) which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM Documents describing CMIP5 models and simulations that are being downscaled, access to software tools that have been developed in order to search, manipulate, and visualize CIM metadata, and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated that data) more open and transparent to the user community.
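
    The schema-plus-controlled-vocabulary split can be illustrated with a trivial validator; the field name and vocabulary below are stand-ins, not NCPP's actual downscaling CV.

```python
# Stand-in controlled vocabulary: allowed values per metadata field.
CONTROLLED_VOCAB = {"downscaling_method": {"statistical", "dynamical"}}

def validate(doc, cv):
    """Return (field, value) pairs whose value falls outside the CV."""
    return [(f, v) for f, v in doc.items() if f in cv and v not in cv[f]]

ok_errors = validate({"downscaling_method": "dynamical"}, CONTROLLED_VOCAB)
bad_errors = validate({"downscaling_method": "guesswork"}, CONTROLLED_VOCAB)
```

    The schema decides which fields a document may contain; the CV, as here, decides which values those fields may take.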

  4. Data catalog project—A browsable, searchable, metadata system

    International Nuclear Information System (INIS)

    Stillerman, Joshua; Fredian, Thomas; Greenwald, Martin; Manduchi, Gabriele

    2016-01-01

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the
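
    The data-dictionary idea can be caricatured in a few lines; the attribute names and the MDSplus-style URI are invented examples, not the project's actual schema.

```python
# Toy data dictionary: descriptive metadata plus a transparent
# data-access URI per measurement, keyed by name.
catalog = {}

def register(name, units, description, uri):
    """Record descriptive metadata and a data-access URI for one product."""
    catalog[name] = {"units": units, "description": description, "uri": uri}

register("ip_current", "A", "Plasma current measurement",
         "mdsplus://server/tree/ip")
entry = catalog["ip_current"]
```

    A consumer program can browse the catalog for descriptions and then dereference the URI to fetch the actual data, without knowing where it is stored.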

  5. Data catalog project—A browsable, searchable, metadata system

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Fredian, Thomas; Greenwald, Martin [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Manduchi, Gabriele [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2016-11-15

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the

  6. How Important Are Student-Selected versus Instructor-Selected Literature Resources for Students' Learning and Motivation in Problem-Based Learning?

    Science.gov (United States)

    Wijnia, Lisette; Loyens, Sofie M.; Derous, Eva; Schmidt, Henk G.

    2015-01-01

    In problem-based learning students are responsible for their own learning process, which becomes evident when they must act independently, for example, when selecting literature resources for individual study. It is a matter of debate whether it is better to have students select their own literature resources or to present them with a list of…

  7. Title, Description, and Subject are the Most Important Metadata Fields for Keyword Discoverability

    Directory of Open Access Journals (Sweden)

    Laura Costello

    2016-09-01

    A Review of: Yang, L. (2016). Metadata effectiveness in internet discovery: An analysis of digital collection metadata elements and internet search engine keywords. College & Research Libraries, 77(1), 7-19. http://doi.org/10.5860/crl.77.1.7 Objective – To determine which metadata elements best facilitate discovery of digital collections. Design – Case study. Setting – A public research university serving over 32,000 graduate and undergraduate students in the Southwestern United States of America. Subjects – A sample of 22,559 keyword searches leading to the institution’s digital repository between August 1, 2013, and July 31, 2014. Methods – The author used Google Analytics to analyze 73,341 visits to the institution’s digital repository. He determined that 22,559 of these visits were due to keyword searches. Using Random Integer Generator, the author identified a random sample of 378 keyword searches. The author then matched the keywords with the Dublin Core and VRA Core metadata elements on the landing page in the digital repository to determine which metadata field had drawn the keyword searcher to that particular page. Many of these keywords matched more than one metadata field, so the author also analyzed the metadata elements that generated unique keyword hits and those fields that were frequently matched together. Main Results – Title was the most matched metadata field with 279 matched keywords from searches. Description and Subject were also significant fields with 208 and 79 matches respectively. Slightly more than half of the results, 195 keywords, matched the institutional repository in one field only. Both Title and Description had significant match rates both independently and in conjunction with other elements, but Subject keywords were the sole match in only three of the sampled cases. Conclusion – The Dublin Core elements of Title, Description, and Subject were the most frequently matched fields in keyword
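
    The matching step can be sketched as a simple containment test of each sampled keyword against a record's fields, with a tally of which fields matched; the record and keywords below are invented examples.

```python
from collections import Counter

# Invented landing-page record with a few Dublin Core-style fields.
record = {
    "Title": "prairie dog colonies",
    "Description": "aerial survey of prairie dog towns",
    "Subject": "mammalogy",
}

def matched_fields(keyword, record):
    """Return the metadata fields whose text contains the keyword."""
    return [f for f, text in record.items() if keyword.lower() in text.lower()]

tally = Counter()
for kw in ["prairie dog", "mammalogy"]:
    for field in matched_fields(kw, record):
        tally[field] += 1
```

    A keyword like "prairie dog" matches two fields at once, which is exactly the multi-field-match situation the study had to account for.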

  8. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
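
    A tiny stand-in for the collect-index-search pipeline: a plain inverted index instead of Apache Solr, with invented records.

```python
# Invented metadata records harvested from project servers.
records = [
    {"id": "ngee-1", "title": "Arctic soil temperature"},
    {"id": "arm-7", "title": "Cloud radar moments"},
]

# Build an inverted index: word -> set of record ids.
index = {}
for rec in records:
    for word in rec["title"].lower().split():
        index.setdefault(word, set()).add(rec["id"])

def search(term):
    """Look a single term up in the inverted index, case-insensitively."""
    return index.get(term.lower(), set())
```

    A real deployment like Mercury delegates this indexing and querying to Solr, which adds ranking, faceting and the autocomplete behaviour described above.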

  9. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible.
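
    The real-time grid validation described above might look roughly like this; the required fields and the nine-character IGSN pattern are simplifying assumptions, not SESAR's actual rules.

```python
import re

# Assumed minimal set of required template columns.
REQUIRED = {"sample_name", "material"}

def validate_row(row):
    """Return human-readable problems found in one uploaded template row."""
    problems = [f"missing {f}" for f in sorted(REQUIRED - row.keys())]
    igsn = row.get("igsn")
    if igsn and not re.fullmatch(r"[A-Z0-9]{9}", igsn):
        problems.append("malformed igsn")
    return problems

ok = validate_row({"sample_name": "B-17", "material": "basalt", "igsn": "IEABC0001"})
bad = validate_row({"igsn": "x"})
```

    Running such checks per row as the grid is edited lets the user fix problems before registration, rather than after a failed batch submit.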

  10. DELIVERing Library Resources to the Virtual Learning Environment

    Science.gov (United States)

    Secker, Jane

    2005-01-01

    Purpose: Examines a project to integrate digital libraries and virtual learning environments (VLE) focusing on requirements for online reading list systems. Design/methodology/approach: Conducted a user needs analysis using interviews and focus groups and evaluated three reading or resource list management systems. Findings: Provides a technical…

  11. Leveraging Metadata to Create Interactive Images... Today!

    Science.gov (United States)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
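
    In spirit, reading such an embedded packet is just XML parsing. The element names below only imitate the flavour of an AVM/XMP packet and are not the standard's actual schema.

```python
import xml.etree.ElementTree as ET

# Invented, simplified packet: a title plus a sky position (RA, Dec).
packet = """<avm>
  <Title>Helix Nebula</Title>
  <SpatialReferenceValue>337.41,-20.84</SpatialReferenceValue>
</avm>"""

root = ET.fromstring(packet)
title = root.findtext("Title")
ra, dec = (float(v) for v in root.findtext("SpatialReferenceValue").split(","))
```

    Because the packet travels inside the image file itself, any downstream tool (a catalog app, a sky viewer) can recover the caption and the sky position without consulting the originating website.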

  12. Learning with Nature and Learning from Others: Nature as Setting and Resource for Early Childhood Education

    Science.gov (United States)

    MacQuarrie, Sarah; Nugent, Clare; Warden, Claire

    2015-01-01

    Nature-based learning is an increasingly popular type of early childhood education. Despite this, children's experiences--in particular, their form and function within different settings and how they are viewed by practitioners--are relatively unknown. Accordingly, the use of nature as a setting and a resource for learning was researched. A…

  13. Questions of quality in repositories of open educational resources: a literature review

    Directory of Open Access Journals (Sweden)

    Javiera Atenas

    2014-07-01

    Open educational resources (OER) are teaching and learning materials which are freely available and openly licensed. Repositories of OER (ROER) are platforms that host and facilitate access to these resources. ROER should not just be designed to store this content – in keeping with the aims of the OER movement, they should support educators in embracing open educational practices (OEP) such as searching for and retrieving content that they will reuse, adapt or modify as needed, without economic barriers or copyright restrictions. This paper reviews key literature on OER and ROER, in order to understand the roles ROER are said or supposed to fulfil in relation to furthering the aims of the OER movement. Four themes which should shape repository design are identified, and the following 10 quality indicators (QI) for ROER effectiveness are discussed: featured resources; user evaluation tools; peer review; authorship of the resources; keywords of the resources; use of standardised metadata; multilingualism of the repositories; inclusion of social media tools; specification of the Creative Commons license; availability of the source code or original files. These QI form the basis of a method for the evaluation of ROER initiatives which, in concert with considerations of achievability and long-term sustainability, should assist in enhancement and development.
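
    The ten indicators lend themselves to a simple checklist score. Which indicators a real repository satisfies would come from an actual evaluation, so the labels and the satisfied set below are purely illustrative.

```python
# Shorthand labels for the ten quality indicators discussed above.
QUALITY_INDICATORS = [
    "featured resources", "user evaluation tools", "peer review",
    "authorship of the resources", "keywords of the resources",
    "use of standardised metadata", "multilingualism",
    "social media tools", "creative commons license",
    "availability of source files",
]

def score(satisfied):
    """Fraction of the ten indicators a repository meets."""
    return len(set(satisfied) & set(QUALITY_INDICATORS)) / len(QUALITY_INDICATORS)

s = score({"peer review", "use of standardised metadata"})
```

    A flat fraction is only a starting point: the paper pairs the indicators with achievability and sustainability considerations rather than reducing them to one number.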

  14. Semantic metadata application for information resources systematization in water spectroscopy (A. Fazliev, A. Privezentsev, J. Tennyson; Institute of Atmospheric Optics SB RAS, Tomsk, Russia; University College London, London, UK)

    Science.gov (United States)

    Fazliev, A.

    2009-04-01

    The information and knowledge layers of an information-computational system for water spectroscopy are described. Semantic metadata for all the tasks of the domain information model that form the basis of these layers have been studied. The principle of semantic metadata determination and the mechanisms of its usage for information systematization in molecular spectroscopy are revealed. The software developed for working with semantic metadata is described as well. Formation of a domain model in the framework of the Semantic Web is based on the use of an explicit specification of its conceptualization or, in other words, its ontologies. Formation of the conceptualization for molecular spectroscopy was described in Refs. 1, 2. In these works two chains of tasks are selected as a zeroth approximation for the knowledge domain description: a chain of direct tasks and a chain of inverse tasks. The solution schemes of these tasks define the approximation of the data layer for the knowledge domain conceptualization. The properties of spectroscopy task solutions lead to a step-by-step extension of the molecular spectroscopy conceptualization; the information layer of the information system corresponds to this extension. An advantage of a molecular spectroscopy model designed as a chain of tasks is that one can explicitly define data and metadata at each step of the solution of these tasks. The metadata structure (task solution properties) in the knowledge domain also has the form of a chain, in which input data and metadata of the previous task become metadata of the following tasks. The term metadata is used in its narrow sense: metadata are the properties of spectroscopy task solutions. Semantic metadata represented with the help of OWL are formed automatically and are individuals of classes (A-box). The unification of the T-box and A-box is an ontology that can be processed with the help of an inference engine. In this work we analyzed the formation of individuals of molecular spectroscopy

  15. Can e-learning help you to connect compassionately? Commentary on a palliative care e-learning resource for India.

    Science.gov (United States)

    Datta, Soumitra Shankar; Agrawal, Sanjit

    2017-01-01

    e-learning resources need to be customised to the audience and learners to make them culturally relevant. The 'Palliative care e-learning resource for health care professionals in India' has been developed by the Karunashraya Hospice, Bengaluru in collaboration with the Cardiff Palliative Care Education Team, Wales to address the training needs of professionals in India. The resource, comprising over 20 modules, integrates psychological, social and medical care for patients requiring palliative care for cancer and other diseases. With increased internet usage, it would help in training a large number of professionals and volunteers in India who want to work in the field of palliative care.

  16. Comparative Study of Metadata Elements Used in the Website of Central Library of Universities Subordinate to the Ministry of Science, Research and Technology with the Dublin Core Metadata Elements

    Directory of Open Access Journals (Sweden)

    Kobra Babaei

    2012-03-01

    This research was carried out with the aim of studying the use of metadata elements on the websites of the central libraries of universities subordinate to the Ministry of Science, Research and Technology, and comparing it with the Dublin Core standard elements. This study was a comparative survey of 40 academic library websites, examined using the Internet Explorer browser. The HTML pages of these websites were inspected through the View Source menu, and the metadata elements of each website were extracted and entered in a checklist. The data were then analysed using descriptive statistics (frequency, percentage and mean). The findings showed that the reviewed websites did not use any Dublin Core metadata elements, while general HTML metadata markup was used in the design of all websites. In terms of the number of metadata elements used, the central libraries of Ferdowsi University of Mashhad and Iran Science and Industries ranked first with 57%, Shahid Beheshti University ranked second with 49%, and the International University of Imam Khomeini ranked third with 40%. The priorities of the web designers were also determined: the content of the resource ranked first, the physical appearance of the resource second, and the ownership of the resource third.
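
    The extraction step the survey describes (pulling meta elements out of page source and checking their names against the 15 Dublin Core elements) can be sketched with Python's standard html.parser; the HTML snippet is an invented example.

```python
from html.parser import HTMLParser

# The 15 elements of the simple Dublin Core element set.
DUBLIN_CORE = {"title", "creator", "subject", "description", "publisher",
               "contributor", "date", "type", "format", "identifier",
               "source", "language", "relation", "coverage", "rights"}

class MetaCollector(HTMLParser):
    """Collect the name attribute of every <meta> tag on a page."""
    def __init__(self):
        super().__init__()
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs:
                self.names.append(attrs["name"].lower())

page = ('<head><meta name="DC.Title" content="Central Library">'
        '<meta name="keywords" content="library"></head>')
parser = MetaCollector()
parser.feed(page)
dc_used = [n for n in parser.names if n.split(".")[-1] in DUBLIN_CORE]
```

    Run over each library site, the `dc_used` list would be empty exactly when the site uses only general meta tags, which is the study's main finding.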

  17. Assessment for Complex Learning Resources: Development and Validation of an Integrated Model

    Directory of Open Access Journals (Sweden)

    Gudrun Wesiak

    2013-01-01

    Today’s e-learning systems face the challenge of providing interactive, personalized environments that support self-regulated learning as well as social collaboration and simulation. At the same time, assessment procedures have to be adapted to the new learning environments by moving from isolated summative assessments to integrated assessment forms. Therefore, learning experiences enriched with complex didactic resources - such as virtualized collaborations and serious games - have emerged. In this extension of [1] an integrated model for e-assessment (IMA) is outlined, which incorporates complex learning resources and assessment forms as the main components for the development of an enriched learning experience. For validation, the IMA was presented to a group of experts from the fields of cognitive science, pedagogy, and e-learning. The findings from the validation led to several refinements of the model, which mainly concern the forms of assessment and the integration of social aspects. Both aspects are accounted for in the revised model, the former by providing a detailed sub-model for assessment forms.

  18. Dealing with metadata quality: the legacy of digital library efforts

    OpenAIRE

    Tani, Alice; Candela, Leonardo; Castelli, Donatella

    2013-01-01

    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital librar...

  19. Development of inquiry-based learning activities integrated with the local learning resource to promote learning achievement and analytical thinking ability of Mathayomsuksa 3 student

    Science.gov (United States)

    Sukji, Paweena; Wichaidit, Pacharee Rompayom; Wichaidit, Sittichai

    2018-01-01

    The objectives of this study were to: 1) compare the learning achievement and analytical thinking ability of Mathayomsuksa 3 students before and after learning through inquiry-based learning activities integrated with the local learning resource, and 2) compare the average post-test scores for learning achievement and analytical thinking ability with their cutting scores. The target of this study was 23 Mathayomsuksa 3 students who were studying in the second semester of the 2016 academic year at Banchatfang School, Chainat Province. The research instruments comprised: 1) 6 lesson plans on Environment and Natural Resources, 2) a learning achievement test, and 3) an analytical thinking ability test. The results showed that 1) students' learning achievement and analytical thinking ability after learning were higher than before, at the .05 level of statistical significance, and 2) the average post-test scores for students' learning achievement and analytical thinking ability were higher than the cutting scores, at the .05 level of statistical significance. The implication of this research is for science teachers and curriculum developers to design inquiry activities that relate to students' context.

  20. SM4AM: A Semantic Metamodel for Analytical Metadata

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation BI systems emerge as platforms where traditional BI tools meet semi-structured and unstructured data coming from the Web. In these settings, the user-centric orientation represents a key characteristic for the acceptance and wide usage by numerous and diverse end users in their data....... We present SM4AM, a Semantic Metamodel for Analytical Metadata created as an RDF formalization of the Analytical Metadata artifacts needed for user assistance exploitation purposes in next generation BI systems. We consider the Linked Data initiative and its relevance for user assistance...

  1. Instance Selection for Classifier Performance Estimation in Meta Learning

    OpenAIRE

    Marcin Blachnik

    2017-01-01

    Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time consuming but can be accelerated with meta-learning: automatic model recommendation by estimating the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be inform...
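    The metadata descriptors mentioned above are typically simple dataset-level statistics ("meta-features"). As an illustration only (the paper's actual descriptors are not listed in this abstract), a minimal sketch of extracting a few common meta-features from a tabular dataset:

```python
# Hypothetical sketch of the kind of metadata descriptors ("meta-features")
# a meta-learning system might extract from a tabular dataset.
import math
from collections import Counter

def meta_features(rows, labels):
    """Compute a few simple dataset-level descriptors."""
    n_samples = len(rows)
    n_features = len(rows[0]) if rows else 0
    counts = Counter(labels)
    # Shannon entropy of the class distribution, in bits
    class_entropy = -sum(
        (c / n_samples) * math.log2(c / n_samples) for c in counts.values()
    )
    return {
        "n_samples": n_samples,
        "n_features": n_features,
        "n_classes": len(counts),
        "class_entropy": round(class_entropy, 3),
    }

rows = [[1.0, 2.0], [0.5, 1.5], [3.0, 0.1], [2.2, 2.2]]
labels = ["a", "a", "b", "b"]
print(meta_features(rows, labels))
```

    A meta-learner would then be trained on such descriptor vectors, paired with the measured performance of candidate models on previously seen datasets.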

  2. Learning with nature and learning from others: nature as setting and resource for early childhood education

    OpenAIRE

    MacQuarrie, Sarah; Nugent, Clare; Warden, Claire

    2015-01-01

    Nature-based learning is an increasingly popular type of early childhood education. Despite this, children's experiences (in particular, their form and function within different settings, and how they are viewed by practitioners) are relatively unknown. Accordingly, the use of nature as a setting and a resource for learning was researched. A description and an emerging understanding of nature-based learning were obtained through the use of a group discussion and case studies. Practitioners' view...

  3. Social media as an open-learning resource in medical education: current perspectives.

    Science.gov (United States)

    Sutherland, S; Jalali, A

    2017-01-01

    Numerous studies evaluate the use of social media as an open-learning resource in education, but there is little published empirical evidence that such open-learning resources produce educative outcomes, particularly with regard to student performance. This study undertook a systematic review of the published literature in medical education to determine the state of the evidence regarding empirical studies that conduct an evaluation or research on social media and open-learning resources. The authors searched MEDLINE, ERIC, Embase, PubMed, Scopus, and Google Scholar from 2012 to 2017. The search used keywords related to social media, medical education, research, and evaluation, and was restricted to peer-reviewed, English-language articles only. To meet the inclusion criteria, manuscripts had to employ evaluative methods and undertake empirical research. Empirical work designed to evaluate the impact of social media as an open-learning resource in medical education is limited, as only 13 studies met the inclusion criteria. The majority of these studies used undergraduate medical education as the backdrop to investigate open-learning resources, such as Facebook, Twitter, and YouTube. YouTube appears to have little educational value due to the unsupervised nature of content added on a daily basis. Overall, extant reviews have demonstrated that we know a considerable amount about social media use, although to date, its impacts remain unclear. There is a paucity of outcome-based, empirical studies assessing the impact of social media in medical education. The few empirical studies identified tend to focus on evaluating the affective outcomes of social media and medical education as opposed to understanding any linkages between social media and performance outcomes. Given the potential for social media use in medical education, more empirical evaluative studies are required to determine its educational value.

  4. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources: for example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time informatio...

  5. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    Science.gov (United States)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    The affordable availability of computing power and digital storage has been a boon for the scientific community. The hydroinformatics community has also benefitted from the so-called digital revolution, which has enabled the tackling of more and more complex physical phenomena using hydroinformatic models, instruments, sensors, etc. With models getting more complex, computational domains getting larger and the resolution of computational grids and measurement data getting finer, a large amount of data is generated and consumed in any hydroinformatics-related project. The ubiquitous availability of the internet also contributes to this phenomenon, with data being collected through sensor networks connected to telecommunications networks and the internet long before the term Internet of Things existed. Although generally positive, this exponential increase in the number of available datasets gives rise to the need to describe the data in a standardised way, not only to provide a quick overview of the data but also to facilitate the interoperability of data from different sources. The Federal Waterways Engineering and Research Institute (BAW) is a federal authority of the German Federal Ministry of Transport and Digital Infrastructure. BAW acts as a consultant for the safe and efficient operation of the German waterways. As part of its consultation role, BAW operates a number of physical and numerical models for sections of inland and marine waterways. In order to uniformly describe the data produced and consumed by these models throughout BAW, and to ensure interoperability with other federal and state institutes on the one hand and with EU countries on the other, a metadata profile for hydroinformatic data has been developed at BAW. The metadata profile is composed in its entirety using the ISO 19115 international standard for metadata related to geographic information.
Due to the widespread use of the ISO 19115 standard in the existing geodata infrastructure

  6. Creating a Framework of a Resource-Based E-Learning Environment for Science Learning in Primary Classrooms

    Science.gov (United States)

    So, Winnie W. M.

    2012-01-01

    Advancements in information and communications technology and the rapid expansion of the Internet have changed the nature and the mode of the presentation and delivery of teaching and learning resources. This paper discusses the results of a study aimed at investigating how five teachers planned to integrate online resources in their teaching of…

  7. Using Google Tag Manager and Google Analytics to track DSpace metadata fields as custom dimensions

    Directory of Open Access Journals (Sweden)

    Suzanna Conrad

    2015-01-01

    Full Text Available DSpace can be problematic for those interested in tracking download and pageview statistics granularly. Some libraries have implemented code to track events on websites and some have experimented with using Google Tag Manager to automate event tagging in DSpace. While these approaches make it possible to track download statistics, granular details such as authors, content types, titles, advisors, and other fields for which metadata exist are generally not tracked in DSpace or Google Analytics without coding. Moreover, it can be time consuming to track and assess pageview data and relate that data back to particular metadata fields. This article will detail the learning process of incorporating custom dimensions for tracking these detailed fields including trial and error attempts to use the data import function manually in Google Analytics, to automate the data import using Google APIs, and finally to automate the collection of dimension data in Google Tag Manager by mimicking SEO practices for capturing meta tags. This specific case study refers to using Google Tag Manager and Google Analytics with DSpace; however, this method may also be applied to other types of websites or systems.
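    The meta-tag capture described above can be illustrated outside Google Tag Manager. The sketch below shows, in Python rather than the custom JavaScript variable GTM would actually use, the principle of scraping the meta elements a repository renders into each item page so their values can be mapped to Google Analytics custom dimensions. The tag names follow the common citation_* convention but are illustrative assumptions, not verified DSpace output.

```python
# Sketch of the extraction principle: pull metadata values out of the
# <meta> tags on an item page.  In Google Tag Manager this logic would live
# in a custom JavaScript variable; Python is used here for illustration.
# Tag names (citation_title, etc.) are assumptions for the example.
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if "name" in a and "content" in a:
                self.meta.setdefault(a["name"], a["content"])

page = """
<html><head>
  <meta name="citation_title" content="Sample Thesis Title">
  <meta name="citation_author" content="Doe, Jane">
  <meta name="DC.type" content="Thesis">
</head><body></body></html>
"""

parser = MetaTagParser()
parser.feed(page)
# Each extracted value would then be sent to a Google Analytics custom dimension.
print(parser.meta)
```
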

  8. Listening to Students: Customer Journey Mapping at Birmingham City University Library and Learning Resources

    Science.gov (United States)

    Andrews, Judith; Eade, Eleanor

    2013-01-01

    Birmingham City University's Library and Learning Resources' strategic aim is to improve student satisfaction. A key element is the achievement of the Customer Excellence Standard. An important component of the standard is the mapping of services to improve quality. Library and Learning Resources has developed a methodology to map these…

  9. Usability Testing of a Multimedia e-Learning Resource for Electrolyte and Acid-Base Disorders

    Science.gov (United States)

    Davids, Mogamat Razeen; Chikte, Usuf; Grimmer-Somers, Karen; Halperin, Mitchell L.

    2014-01-01

    The usability of computer interfaces may have a major influence on learning. Design approaches that optimize usability are commonplace in the software development industry but are seldom used in the development of e-learning resources, especially in medical education. We conducted a usability evaluation of a multimedia resource for teaching…

  10. Structural Metadata Research in the Ears Program

    National Research Council Canada - National Science Library

    Liu, Yang; Shriberg, Elizabeth; Stolcke, Andreas; Peskin, Barbara; Ang, Jeremy; Hillard, Dustin; Ostendorf, Mari; Tomalin, Marcus; Woodland, Phil; Harper, Mary

    2005-01-01

    Both human and automatic processing of speech require recognition of more than just words. In this paper we provide a brief overview of research on structural metadata extraction in the DARPA EARS rich transcription program...

  11. Effect of improving the usability of an e-learning resource: a randomized trial.

    Science.gov (United States)

    Davids, Mogamat Razeen; Chikte, Usuf M E; Halperin, Mitchell L

    2014-06-01

    Optimizing the usability of e-learning materials is necessary to reduce extraneous cognitive load and maximize their potential educational impact. However, this is often neglected, especially when time and other resources are limited. We conducted a randomized trial to investigate whether a usability evaluation of our multimedia e-learning resource, followed by fixing of all problems identified, would translate into improvements in usability parameters and learning by medical residents. Two iterations of our e-learning resource [version 1 (V1) and version 2 (V2)] were compared. V1 was the first fully functional version and V2 was the revised version after all identified usability problems were addressed. Residents in internal medicine and anesthesiology were randomly assigned to one of the versions. Usability was evaluated by having participants complete a user satisfaction questionnaire and by recording and analyzing their interactions with the application. The effect on learning was assessed by questions designed to test the retention and transfer of knowledge. Participants reported high levels of satisfaction with both versions, with good ratings on the System Usability Scale and adjective rating scale. In contrast, analysis of video recordings revealed significant differences in the occurrence of serious usability problems between the two versions, in particular in the interactive HandsOn case with its treatment simulation, where there was a median of five serious problem instances (range: 0-50) recorded per participant for V1 and zero instances (range: 0-1) for V2. Improving the usability of our e-learning resource thus resulted in significant improvements in usability. This is likely to translate into improved motivation and willingness to engage with the learning material. In this population of relatively high-knowledge participants, learning scores were similar across the two versions. Copyright © 2014 The American Physiological Society.

  12. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    Full Text Available This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will be extended only after user-testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but also for inferring hierarchical relationships between metadata elements from a ‘flat’ word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This work addresses tools and software, the semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.

  13. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    Science.gov (United States)

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  14. Use of Online Learning Resources in the Development of Learning Environments at the Intersection of Formal and Informal Learning: The Student as Autonomous Designer

    Science.gov (United States)

    Lebenicnik, Maja; Pitt, Ian; Istenic Starcic, Andreja

    2015-01-01

    Learning resources that are used in the education of university students are often available online. The nature of new technologies causes an interweaving of formal and informal learning, with the result that a more active role is expected from students with regard to the use of ICT for their learning. The variety of online learning resources…

  15. Digital Preservation and Deep Infrastructure; Dublin Core Metadata Initiative Progress Report and Workplan for 2002; Video Gaming, Education and Digital Learning Technologies: Relevance and Opportunities; Digital Collections of Real World Objects; The MusArt Music-Retrieval System: An Overview; eML: Taking Mississippi Libraries into the 21st Century.

    Science.gov (United States)

    Granger, Stewart; Dekkers, Makx; Weibel, Stuart L.; Kirriemuir, John; Lensch, Hendrik P. A.; Goesele, Michael; Seidel, Hans-Peter; Birmingham, William; Pardo, Bryan; Meek, Colin; Shifrin, Jonah; Goodvin, Renee; Lippy, Brooke

    2002-01-01

    One opinion piece and five articles in this issue discuss: digital preservation infrastructure; accomplishments and changes in the Dublin Core Metadata Initiative in 2001 and plans for 2002; video gaming and how it relates to digital libraries and learning technologies; overview of a music retrieval system; and the online version of the…

  16. A holistic model for evaluating the impact of individual technology-enhanced learning resources.

    Science.gov (United States)

    Pickering, James D; Joynes, Viktoria C T

    2016-12-01

    The use of technology within education has now crossed the Rubicon; student expectations, the increasing availability of both hardware and software and the push to fully blended learning environments mean that educational institutions cannot afford to turn their backs on technology-enhanced learning (TEL). The ability to meaningfully evaluate the impact of TEL resources nevertheless remains problematic. This paper aims to establish a robust means of evaluating individual resources and meaningfully measure their impact upon learning within the context of the program in which they are used. Based upon the experience of developing and evaluating a range of mobile and desktop based TEL resources, this paper outlines a new four-stage evaluation process, taking into account learner satisfaction, learner gain, and the impact of a resource on both the individual and the institution in which it has been adapted. A new multi-level model of TEL resource evaluation is proposed, which includes a preliminary evaluation of need, learner satisfaction and gain, learner impact and institutional impact. Each of these levels are discussed in detail, and in relation to existing TEL evaluation frameworks. This paper details a holistic, meaningful evaluation model for individual TEL resources within the specific context in which they are used. It is proposed that this model is adopted to ensure that TEL resources are evaluated in a more meaningful and robust manner than is currently undertaken.

  17. Towards an Interoperable Field Spectroscopy Metadata Standard with Extended Support for Marine Specific Applications

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-11-01

    Full Text Available This paper presents an approach to developing robust metadata standards for specific applications that serves to ensure a high level of reliability and interoperability for a spectroscopy dataset. The challenges of designing a metadata standard that meets the unique requirements of specific user communities are examined, including in situ measurement of reflectance underwater, using coral as a case in point. Metadata schema mappings from seven existing metadata standards demonstrate that they consistently fail to meet the needs of field spectroscopy scientists for general and specific applications (μ = 22%, σ = 32% conformance with the core metadata requirements, and μ = 19%, σ = 18% for the special case of a benthic (e.g., coral) reflectance metadataset). Issues such as field measurement methods, instrument calibration, and data representativeness for marine field spectroscopy campaigns are investigated within the context of submerged benthic measurements. The implication of semantics and syntax for a robust and flexible metadata standard are also considered. A hybrid standard that serves as a “best of breed”, incorporating useful modules and parameters from within the existing standards, is proposed. This paper is Part 3 in a series of papers in this journal examining the issues central to a metadata standard for field spectroscopy datasets. The results presented in this paper are an important step towards field spectroscopy metadata standards that address the specific needs of field spectroscopy data stakeholders while facilitating dataset documentation, quality assurance, discoverability and data exchange within large-scale information sharing platforms.

  18. Reference framework for integrating web resources as e-learning services in .LRN

    Directory of Open Access Journals (Sweden)

    Fabinton Sotelo Gómez

    2015-11-01

    Full Text Available Learning management platforms (LMS) such as Dot LRN (.LRN) have been widely disseminated and used as teaching tools. However, despite their great potential, most of these platforms do not allow easy integration of common services on the Web. Integration of external resources into an LMS is critical to extending the quantity and quality of its educational services. This article presents a set of criteria and architectural guidelines for the integration of Web resources for e-learning into the .LRN platform. To this end, three steps are performed: first, the possible integration technologies to be used are described; second, the Web resources that provide educational services and can be integrated into LMS platforms are analyzed; finally, some architectural aspects of the platform relevant to integration are identified. The main contributions of this paper are: a characterization of the Web resources and educational services available today on the Web, and the definition of criteria and guidelines for the integration of Web resources into .LRN.

  19. Scientific Reproducibility in Biomedical Research: Provenance Metadata Ontology for Semantic Annotation of Study Description.

    Science.gov (United States)

    Sahoo, Satya S; Valdez, Joshua; Rueschman, Michael

    2016-01-01

    Scientific reproducibility is key to scientific progress, as it allows the research community to build on validated results, protect patients from potentially harmful trial drugs derived from incorrect results, and reduce the wastage of valuable resources. The National Institutes of Health (NIH) recently published a systematic guideline titled "Rigor and Reproducibility" for supporting reproducible research studies, which has also been accepted by several scientific journals. These journals will require published articles to conform to these new guidelines. Provenance metadata describes the history or origin of data, and it has long been used in computer science to capture metadata information for ensuring data quality and supporting scientific reproducibility. In this paper, we describe the development of the Provenance for Clinical and healthcare Research (ProvCaRe) framework together with a provenance ontology to support scientific reproducibility by formally modeling a core set of data elements representing the details of a research study. We extend the PROV Ontology (PROV-O), which has been recommended as the provenance representation model by the World Wide Web Consortium (W3C), to represent both: (a) data provenance, and (b) process provenance. We use 124 study variables from 6 clinical research studies from the National Sleep Research Resource (NSRR) to evaluate the coverage of the provenance ontology. NSRR is the largest repository of NIH-funded sleep datasets, with 50,000 studies from 36,000 participants. The provenance ontology reuses ontology concepts from existing biomedical ontologies, for example the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), to model the provenance information of research studies. The ProvCaRe framework is being developed as part of the Big Data to Knowledge (BD2K) data provenance project.

  20. OER, Resources for Learning--Experiences from an OER Project in Sweden

    Science.gov (United States)

    Ossiannilsson, Ebba S. I.; Creelman, Alastair M.

    2012-01-01

    This article aims to share experience from a Swedish project on the introduction and implementation of Open Educational Resources (OER) in higher education with both national and international perspectives. The project, "OER--resources for learning", was part of the National Library of Sweden Open Access initiative and aimed at exploring, raising…

  1. Teaching with technology: free Web resources for teaching and learning.

    Science.gov (United States)

    Wink, Diane M; Smith-Stoner, Marilyn

    2011-01-01

    In this bimonthly series, the department editor examines how nurse educators can use Internet and Web-based computer technologies such as search, communication, collaborative writing tools; social networking, and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. In this article, the department editor and her coauthor describe free Web-based resources that can be used to support teaching and learning.

  2. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard) interface. The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of these problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article is focused only on the situation in 2010. The paper was created in the context of research on intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.
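    Such a metadata-existence test amounts to fetching a service's GetCapabilities document and checking whether the basic descriptive elements are present and non-empty. A minimal sketch of that check follows; the XML fragment is hand-written for illustration, not a real server response, and the required-element list is an assumption rather than the article's actual criteria.

```python
# Minimal sketch: test whether a WMS GetCapabilities document carries basic
# service metadata.  The XML below is a hand-written fragment for the example.
import xml.etree.ElementTree as ET

capabilities = """
<WMS_Capabilities>
  <Service>
    <Title>Example map service</Title>
    <Abstract></Abstract>
    <ContactInformation/>
  </Service>
</WMS_Capabilities>
"""

def missing_metadata(xml_text, required=("Title", "Abstract", "ContactInformation")):
    """Return the names of required Service metadata elements that are absent or empty."""
    root = ET.fromstring(xml_text)
    service = root.find("Service")
    missing = []
    for name in required:
        elem = service.find(name) if service is not None else None
        # An element counts as missing if it is absent, or has neither text nor children.
        if elem is None or ((elem.text or "").strip() == "" and len(elem) == 0):
            missing.append(name)
    return missing

print(missing_metadata(capabilities))  # → ['Abstract', 'ContactInformation']
```

    Running such a check across a catalogue of WMS endpoints gives exactly the kind of year-over-year comparison the article describes.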

  3. Evaluation of metadata quality in digital repositories of learning objects

    Directory of Open Access Journals (Sweden)

    Valentina Tabares Morales

    2013-09-01

    Full Text Available Educational digital resources stored in repositories require metadata to describe them properly; such metadata are fundamental for search and retrieval processes. The aim of this paper is to present an approach for calculating metrics that determine the quality level of the labeling of digital resources, specifically Learning Objects (LOs). We describe how to evaluate resources along three dimensions, namely completeness, consistency and coherence, based on the IEEE LOM standard. To validate the results that these metrics produce, experimental tests were performed with synthetic and real LOs, concluding that the metrics allow the quality of the metadata involved to be evaluated. The proposed approach thus provides a plausible and relevant input for the automatic evaluation of metadata, as well as for LO repository management and recommendation processes.
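    As an illustration of the completeness dimension only (the field weights and formula below are invented for the example, not taken from the paper), a weighted completeness score over an LO's metadata record could be computed like this:

```python
# Illustrative sketch of a completeness metric for a Learning Object record.
# Field names are drawn from IEEE LOM categories; the weights are invented
# assumptions for this example.
LOM_WEIGHTS = {
    "general.title": 3,
    "general.description": 3,
    "general.keyword": 2,
    "educational.learningResourceType": 2,
    "rights.copyright": 1,
}

def completeness(record, weights=LOM_WEIGHTS):
    """Weighted fraction of metadata fields that are present and non-empty."""
    total = sum(weights.values())
    filled = sum(w for field, w in weights.items() if record.get(field, "").strip())
    return filled / total

record = {
    "general.title": "Introduction to Metadata",
    "general.description": "A short learning object about metadata.",
    "general.keyword": "",          # present but empty: counts as missing
    "rights.copyright": "CC-BY",
}
print(round(completeness(record), 2))  # → 0.64
```

    Consistency and coherence would need their own checks (e.g. vocabulary conformance and agreement between fields), which this sketch does not attempt.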

  4. Model of e-learning with electronic educational resources of new generation

    OpenAIRE

    A. V. Loban; D. A. Lovtsov

    2017-01-01

    Purpose of the article: improving the scientific and methodical base of the theory of variable e-learning. Methods used: conceptual and logical modeling of the variable e-learning process with an electronic educational resource of the new generation, and system analysis of the interconnections between the studied subject area, methods, didactic approaches and information and communication technology means. Results: the formalized complex model of variable e-learning with elec...

  5. An Investigation into Saudi Students' Knowledge of and Attitudes towards E-Resources on BBC Learning English

    Science.gov (United States)

    Alzahrani, Khalid Saleh

    2017-01-01

    The BBC Learning English website has become an important method of learning and studying English as a second language, a resource that enhances the importance of e-learning. The aim of the current research is to find Saudi students' knowledge of and attitude towards e-resources on BBC Learning English. The sample size was 28 participants (17 male…

  6. Plasticity in learning causes immediate and trans-generational changes in allocation of resources.

    Science.gov (United States)

    Snell-Rood, Emilie C; Davidowitz, Goggy; Papaj, Daniel R

    2013-08-01

    Plasticity in the development and expression of behavior may allow organisms to cope with novel and rapidly changing environments. However, plasticity itself may depend on the developmental experiences of an individual. For instance, individuals reared in complex, enriched environments develop enhanced cognitive abilities as a result of increased synaptic connections and neurogenesis. This suggests that costs associated with behavioral plasticity-in particular, increased investment in "self" at the expense of reproduction-may also be flexible. Using butterflies as a system, this work tests whether allocation of resources changes as a result of experiences in "difficult" environments that require more investment in learning. We contrast allocation of resources among butterflies with experience in environments that vary in the need for learning. Butterflies with experience searching for novel (i.e., red) hosts, or searching in complex non-host environments, allocate more resources (protein and carbohydrate reserves) to their own flight muscle. In addition, butterflies with experience in these more difficult environments allocate more resources per individual offspring (i.e., egg size and/or lipid reserves). This results in a mother's experience having significant effects on the growth of her offspring (i.e., dry mass and wing length). A separate study showed this re-allocation of resources comes at the expense of lifetime fecundity. These results suggest that investment in learning, and associated changes in life history, can be adjusted depending on an individual's current need, and their offspring's future needs, for learning.

  7. Lessons Learned From 104 Years of Mobile Observatories

    Science.gov (United States)

    Miller, S. P.; Clark, P. D.; Neiswender, C.; Raymond, L.; Rioux, M.; Norton, C.; Detrick, R.; Helly, J.; Sutton, D.; Weatherford, J.

    2007-12-01

    As the oceanographic community ventures into a new era of integrated observatories, it may be helpful to look back on the era of "mobile observatories" to see what Cyberinfrastructure lessons might be learned. For example, SIO has been operating research vessels for 104 years, supporting a wide range of disciplines: marine geology and geophysics, physical oceanography, geochemistry, biology, seismology, ecology, fisheries, and acoustics. In the last 6 years progress has been made with diverse data types, formats and media, resulting in a fully-searchable online SIOExplorer Digital Library of more than 800 cruises (http://SIOExplorer.ucsd.edu). Public access to SIOExplorer is considerable, with 795,351 files (206 GB) downloaded last year. During the last 3 years the efforts have been extended to WHOI, with a "Multi-Institution Testbed for Scalable Digital Archiving" funded by the Library of Congress and NSF (IIS 0455998). The project has created a prototype digital library of data from both institutions, including cruises, Alvin submersible dives, and ROVs. In the process, the team encountered technical and cultural issues that will be facing the observatory community in the near future. Technological Lessons Learned: Shipboard data from multiple institutions are extraordinarily diverse, and provide a good training ground for observatories. Data are gathered from a wide range of authorities, laboratories, servers and media, with little documentation. Conflicting versions exist, generated by alternative processes. Domain- and institution-specific issues were addressed during initial staging. Data files were categorized and metadata harvested with automated procedures. With our second-generation approach to staging, we achieve higher levels of automation with greater use of controlled vocabularies. Database and XML- based procedures deal with the diversity of raw metadata values and map them to agreed-upon standard values, in collaboration with the Marine Metadata

  8. Application of ICT-based Learning Resources for University Inorganic Chemistry Course Training

    Directory of Open Access Journals (Sweden)

    Tatyana M. Derkach

    2013-01-01

    Full Text Available The article studies the expediency and efficiency of using various ICT-based learning resources in university inorganic chemistry course training, and detects differences in attitudes toward electronic resources between students and faculty members, which create the background for their loss of efficiency

  9. E-learning in medical education in resource constrained low- and middle-income countries.

    Science.gov (United States)

    Frehywot, Seble; Vovides, Yianna; Talib, Zohray; Mikhail, Nadia; Ross, Heather; Wohltjen, Hannah; Bedada, Selam; Korhumel, Kristine; Koumare, Abdel Karim; Scott, James

    2013-02-04

    In the face of severe faculty shortages in resource-constrained countries, medical schools look to e-learning for improved access to medical education. This paper summarizes the literature on e-learning in low- and middle-income countries (LMIC), and presents the spectrum of tools and strategies used. Researchers reviewed literature using terms related to e-learning and pre-service education of health professionals in LMIC. Search terms were connected using the Boolean Operators "AND" and "OR" to capture all relevant article suggestions. Using standard decision criteria, reviewers narrowed the article suggestions to a final 124 relevant articles. Of the relevant articles found, most referred to e-learning in Brazil (14 articles), India (14), Egypt (10) and South Africa (10). While e-learning has been used by a variety of health workers in LMICs, the majority (58%) reported on physician training, while 24% focused on nursing, pharmacy and dentistry training. Although reasons for investing in e-learning varied, expanded access to education was at the core of e-learning implementation which included providing supplementary tools to support faculty in their teaching, expanding the pool of faculty by connecting to partner and/or community teaching sites, and sharing of digital resources for use by students. E-learning in medical education takes many forms. Blended learning approaches were the most common methodology presented (49 articles) of which computer-assisted learning (CAL) comprised the majority (45 articles). Other approaches included simulations and the use of multimedia software (20 articles), web-based learning (14 articles), and eTutor/eMentor programs (3 articles). Of the 69 articles that evaluated the effectiveness of e-learning tools, 35 studies compared outcomes between e-learning and other approaches, while 34 studies qualitatively analyzed student and faculty attitudes toward e-learning modalities. 
E-learning in medical education is a means to an end

  10. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  11. Why is data sharing in collaborative natural resource efforts so hard and what can we do to improve it?

    Science.gov (United States)

    Volk, Carol J; Lucero, Yasmin; Barnas, Katie

    2014-05-01

    Increasingly, research and management in natural resource science rely on very large datasets compiled from multiple sources. While it is generally good to have more data, utilizing large, complex datasets has introduced challenges in data sharing, especially for collaborating researchers in disparate locations ("distributed research teams"). We surveyed natural resource scientists about common data-sharing problems. The major issues identified by our survey respondents (n = 118) when providing data were lack of clarity in the data request (including format of data requested). When receiving data, survey respondents reported various insufficiencies in documentation describing the data (e.g., no data collection description/no protocol, data aggregated, or summarized without explanation). Since metadata, or "information about the data," is a central obstacle in efficient data handling, we suggest documenting metadata through data dictionaries, protocols, read-me files, explicit null value documentation, and process metadata as essential to any large-scale research program. We advocate for all researchers, but especially those involved in distributed teams to alleviate these problems with the use of several readily available communication strategies including the use of organizational charts to define roles, data flow diagrams to outline procedures and timelines, and data update cycles to guide data-handling expectations. In particular, we argue that distributed research teams magnify data-sharing challenges making data management training even more crucial for natural resource scientists. If natural resource scientists fail to overcome communication and metadata documentation issues, then negative data-sharing experiences will likely continue to undermine the success of many large-scale collaborative projects.
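
    One of the practices recommended above, a data dictionary with explicit null-value documentation, can be sketched as a small table shipped alongside the shared data. The columns, units, and descriptions below are hypothetical, standing in for a natural-resource survey dataset:

```python
import csv
import io

# Hypothetical data dictionary for a fish-survey dataset; note the explicit
# null-value documentation, one of the practices recommended above.
data_dictionary = [
    {"column": "site_id", "type": "str", "units": "",
     "nulls": "never null",
     "description": "Survey site code assigned by the field crew"},
    {"column": "fish_count", "type": "int", "units": "fish",
     "nulls": "-999 = site not surveyed",
     "description": "Number of fish observed during the pass"},
    {"column": "temp_c", "type": "float", "units": "deg C",
     "nulls": "empty = sensor failure",
     "description": "Water temperature at the time of the survey"},
]

# Write the dictionary as CSV so it travels with the data file it documents.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["column", "type", "units",
                                         "nulls", "description"])
writer.writeheader()
writer.writerows(data_dictionary)
print(buf.getvalue())
```

    A read-me file distributed with the data would then only need to point at this one table, rather than re-explain each field in prose.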

  13. Social media as an open-learning resource in medical education: current perspectives

    Directory of Open Access Journals (Sweden)

    Sutherland S

    2017-06-01

    Full Text Available S Sutherland,1 A Jalali2 1Department of Critical Care, The Ottawa Hospital, 2Division of Clinical and Functional Anatomy, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada Purpose: Numerous studies evaluate the use of social media as an open-learning resource in education, but there is little published empirical evidence that such open-learning resources produce educative outcomes, particularly with regard to student performance. This study undertook a systematic review of the published literature in medical education to determine the state of the empirical evidence on evaluation and research regarding social media and open-learning resources. Methods: The authors searched MEDLINE, ERIC, Embase, PubMed, Scopus, and Google Scholar from 2012 to 2017. This search used keywords related to social media, medical education, research, and evaluation, and was restricted to peer-reviewed, English-language articles only. To meet inclusion criteria, manuscripts had to employ evaluative methods and undertake empirical research. Results: Empirical work designed to evaluate the impact of social media as an open-learning resource in medical education is limited, as only 13 studies met inclusion criteria. The majority of these studies used undergraduate medical education as the backdrop to investigate open-learning resources, such as Facebook, Twitter, and YouTube. YouTube appears to have little educational value due to the unsupervised nature of content added on a daily basis. Overall, extant reviews have demonstrated that we know a considerable amount about social media use, although to date, its impacts remain unclear. Conclusion: There is a paucity of outcome-based, empirical studies assessing the impact of social media in medical education. The few empirical studies identified tend to focus on evaluating the affective outcomes of social media and medical education as opposed to

  14. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    Science.gov (United States)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  15. Development of an open metadata schema for prospective clinical research (openPCR) in China.

    Science.gov (United States)

    Xu, W; Guan, Z; Sun, J; Wang, Z; Geng, Y

    2014-01-01

    In China, deployment of electronic data capture (EDC) and clinical data management systems (CDMS) for clinical research (CR) is at a very early stage, and about 90% of clinical studies collect and submit clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection. The Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR, following four steps: defining the core functional requirements and deducing the core metadata items; developing archetype models; defining metadata terms and creating archetype records; and developing the implementation syntax. The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report forms (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top-level data groups, 32 lower data groups and 74 data elements. The top-level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to the Data Structure of Electronic Health Record Architecture and Data Standard of China (Chinese EHR Standard). Metadata attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration. OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare-related standards, and is to be publicly available throughout China. It considers future integration of EHR and CR by adopting data structure and data

  16. Crowd-sourced BMS point matching and metadata maintenance with Babel

    DEFF Research Database (Denmark)

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2016-01-01

    Cyber-physical applications, deployed on top of Building Management Systems (BMS), promise energy saving and comfort improvement in non-residential buildings. Such applications are so far mainly deployed as research prototypes. The main roadblock to widespread adoption is the low quality of BMS...... systems. Such applications access sensors and actuators through BMS metadata in form of point labels. The naming of labels is however often inconsistent and incomplete. To tackle this problem, we introduce Babel, a crowd-sourced approach to the creation and maintenance of BMS metadata. In our system...
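
    The point-label inconsistency Babel addresses can be illustrated with a toy normalization map. The labels and canonical names below are invented; a crowd-sourced system like the one described would accumulate such a mapping from many contributors rather than hard-code it:

```python
# Hypothetical raw BMS point labels mapped to crowd-sourced canonical names.
canonical = {
    "ZNT-2.RM_TEMP": "zone_2_temperature",
    "znt2.rmTmp": "zone_2_temperature",
    "AHU1.SAT": "ahu_1_supply_air_temperature",
}

def normalize(label):
    """Resolve a raw point label to its canonical name.

    Falls back to a mechanically cleaned label when no mapping exists yet,
    so applications degrade gracefully for unlabeled points.
    """
    return canonical.get(label, label.lower().replace(".", "_").replace("-", "_"))

print(normalize("znt2.rmTmp"))   # resolved via the crowd-sourced mapping
print(normalize("FCU3.ZN-T"))    # unmapped: mechanical fallback only
```

    The interesting engineering problem, which the abstract hints at, is keeping the mapping complete and consistent as contributors disagree; the lookup itself stays trivial.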

  17. An OWL Ontology for Metadata of Interactive Learning Objects

    Science.gov (United States)

    Luz, Bruno N.; Santos, Rafael; Alves, Bruno; Areão, Andreza S.; Yokoyama, Marcos H.; Guimarães, Marcelo P.

    2015-01-01

    The main purpose of this paper is to present the importance of Interactive Learning Objects (ILO) to improve the teaching-learning process by assuring a constant interaction among teachers and students, which in turn, allows students to be constantly supported by the teacher. The paper describes the ontology that defines the ILO available on the…

  18. MCM generator: a Java-based tool for generating medical metadata.

    Science.gov (United States)

    Munoz, F; Hersh, W

    1998-01-01

    In a previous paper we introduced the need to implement a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, help towards this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server, and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers as core elements in the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as images, movie, and sound files.
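
    The META-tag generation this record describes can be sketched against the standard 15-element Dublin Core set. The sketch below only illustrates the DC.element tag convention, not the MCM generator itself, and the record contents (with subject terms standing in for MeSH headings) are hypothetical:

```python
from html import escape

# The 15 Dublin Core elements; a generator like the one described would
# fill in a subset of these for each document.
DC_ELEMENTS = ("title", "creator", "subject", "description", "publisher",
               "contributor", "date", "type", "format", "identifier",
               "source", "language", "relation", "coverage", "rights")

def dublin_core_meta_tags(record):
    """Render HTML <meta> tags (DC.<element> convention) for a document record."""
    tags = []
    for element in DC_ELEMENTS:
        for value in record.get(element, ()):
            tags.append('<meta name="DC.%s" content="%s">'
                        % (element, escape(value, quote=True)))
    return "\n".join(tags)

# Hypothetical document record; subjects stand in for MeSH headings.
record = {"title": ["Chest Radiograph Atlas"],
          "subject": ["Radiography, Thoracic", "Lung Diseases"],
          "type": ["Image"]}
print(dublin_core_meta_tags(record))
```

    Repeating an element (here, two DC.subject tags) is the usual way to attach multiple index terms to one document.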

  19. Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata

    KAUST Repository

    Alharbi, Basma Mohammed

    2017-04-10

    In the mobile era, data capturing individuals’ locations have become unprecedentedly available. Data from Location-Based Social Networks are one example of large-scale user-location data. Such data provide a valuable source for understanding the patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of the data (at the user level), the imbalance of the data, with power-law distributions of user and location check-in degrees (at the global level), and, more importantly, the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users’ privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns: timestamps and social ties. Time adds context to the inferred patterns, while social ties amplify incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users’ locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different time scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume

  20. Atmospheric Radiation Measurement's Data Management Facility captures metadata and uses visualization tools to assist in routine data management.

    Science.gov (United States)

    Keck, N. N.; Macduff, M.; Martin, T.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) program's Data Management Facility (DMF) plays a critical support role in processing and curating data generated by the Department of Energy's ARM Program. Data are collected in near real time from hundreds of observational instruments spread out all over the globe. Data are then ingested hourly to provide time series data in NetCDF (Network Common Data Form) with standardized metadata. Based on automated processes and a variety of user reviews, the data may need to be reprocessed. Final data sets are then stored and accessed by users through the ARM Archive. Over the course of 20 years, a suite of data visualization tools has been developed to facilitate the operational processes that manage and maintain the more than 18,000 real-time events that move 1.3 TB of data each day through the various stages of the DMF's data system. This poster will present the resources and methodology used to capture metadata and the tools that assist in routine data management and discoverability.

  1. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  2. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  3. Inconsistencies between Academic E-Book Platforms: A Comparison of Metadata and Search Results

    Science.gov (United States)

    Wiersma, Gabrielle; Tovstiadi, Esta

    2017-01-01

    This article presents the results of a study of academic e-books that compared the metadata and search results from major academic e-book platforms. The authors collected data and performed a series of test searches designed to produce the same result regardless of platform. Testing, however, revealed metadata-related errors and significant…

  4. DESIGN OF A METADATA SYSTEM FOR A DATA WAREHOUSE WITH A REVENUE TRACKING CASE STUDY AT PT. TELKOM DIVRE V JAWA TIMUR

    Directory of Open Access Journals (Sweden)

    Yudhi Purwananto

    2004-07-01

    Full Text Available A data warehouse is a corporate data store that collects data from multiple systems and can be used for many purposes, such as analysis and reporting. At PT Telkom Divre V Jawa Timur, a data warehouse called the Regional Database has been built. The Regional Database requires an essential data warehouse component: metadata. Simply defined, metadata is "data about data". In this research, a metadata system is designed, with Revenue Tracking as a case study, as the analysis and reporting component of the Regional Database. Metadata is essential for managing and providing information about the data warehouse. The processes within the data warehouse, and the components related to it, must be integrated with one another to realize the data warehouse characteristics of being subject-oriented, integrated, time-variant, and non-volatile. Metadata must therefore also be able to exchange information among the components of the data warehouse. Web services are used as this exchange mechanism; they communicate using XML technology and the HTTP protocol. With web services, each component
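
    The XML-based metadata exchange this record describes can be sketched with the standard library; the element names and values below are hypothetical, chosen only to match the Revenue Tracking case study:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata entry for one warehouse table, serialized to XML
# for exchange between warehouse components over HTTP, as in the
# web-service mechanism the record describes.
meta = ET.Element("metadata")
table = ET.SubElement(meta, "table", name="revenue_tracking")
ET.SubElement(table, "source").text = "billing_system"
ET.SubElement(table, "refresh_cycle").text = "monthly"

# The unicode string below is what a web-service endpoint would return.
xml_payload = ET.tostring(meta, encoding="unicode")
print(xml_payload)
```

    Because the payload is plain XML over HTTP, any component (ETL jobs, reporting tools) can consume it without sharing the producer's implementation.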

  5. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multi channels in learning activities promise extended benefits from traditional based learning-centred to a collaborative based learning-centred that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and semantic web into OLR offer a broader spectrum of…

  6. Optional Anatomy and Physiology e-Learning Resources: Student Access, Learning Approaches, and Academic Outcomes

    Science.gov (United States)

    Guy, Richard,; Byrne, Bruce; Dobos, Marian

    2018-01-01

    Anatomy and physiology interactive video clips were introduced into a blended learning environment, as an optional resource, and were accessed by ~50% of the cohort. Student feedback indicated that clips were engaging, assisted understanding of course content, and provided lecture support. Students could also access two other optional online…

  7. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    Science.gov (United States)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
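
    The combination of standard CF attributes and extra machine-friendly product metadata described above might look roughly like this. All attribute values, and the two product_* attribute names, are illustrative assumptions rather than Giovanni's actual internal standard:

```python
# CF-convention attributes plus the kind of machine-friendly product
# metadata the abstract notes the CF Conventions are silent on.
variable_metadata = {
    # Standard CF variable attributes
    "standard_name": "air_temperature",
    "units": "K",
    "long_name": "Monthly mean air temperature",
    # Internal additions: descriptive metadata for presentation
    # (hypothetical attribute names and values)
    "product_short_name": "EXAMPLE_PRODUCT",
    "product_version": "7.0",
}

def describe(meta):
    """Assemble a one-line, user-facing parameter description."""
    return "%s (%s) from %s v%s" % (meta["long_name"], meta["units"],
                                    meta["product_short_name"],
                                    meta["product_version"])

print(describe(variable_metadata))
```

    With the product name and version carried as attributes, parameter selection and plot labeling can be driven entirely from the metadata, with no per-product special cases.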

  8. Digital Media for STEM Learning: Developing scientific practice skills in the K-12 STEM classroom with resources from WGBH and PBS LearningMedia

    Science.gov (United States)

    Foster, J.; Connolly, R.

    2017-12-01

    WGBH's "Bringing the Universe to America's Classrooms" project is a 5-year effort to design, produce and evaluate digital media tools and resources that support scientific practice skills in diverse K-12 learners. Resources leverage data and content from NASA and WGBH signature programs, like NOVA, into sound instructional experiences that provide K-12 STEM teachers with free, quality resources for teaching topics in the Earth and Space Sciences. Resources address the content and practices in the new K-12 Framework for Science Education and are aligned with the NGSS. Participants will learn about design strategies, findings from our evaluation efforts, and how to access free resources on PBS LearningMedia.

  9. Introducing the ICF: the development of an online resource to support learning, teaching and curriculum design.

    Science.gov (United States)

    Jones, Lester E

    2011-03-01

    The International Classification of Functioning, Disability and Health (ICF) was adopted as one of the key models to support early health professional learning across a suite of new preregistration health science courses. It was decided that an online resource should be developed to enable students, course designers and teaching staff, across all disciplines, to have access to the same definitions, government policies and other supporting information on disability. As part of the comprehensive curriculum review, enquiry-based learning was adopted as the educational approach. Enquiry-based learning promotes deeper learning by encouraging students to engage in authentic challenges. As such, it was important that the online resource was not merely a site for accessing content, but enabled students to make decisions about where else to explore for credible information about the ICF. The selection of a host location that all students and staff could access meant that the resource could not be located in the existing online learning management system. Construction using software being trialled by the library at La Trobe University allowed for the required access, as well as alignment with an enquiry-based learning approach. Consultation for the content of the online resource included formal and informal working groups on curriculum review. The published version included resources from the World Health Organization, examples of research completed within different disciplines, a test of knowledge and a preformatted search page. The format of the online resource allows for updating of information, and feedback on the utilisation of the software has been used to enhance the student experience. The key issues for the development of this online resource were accessibility for students and staff, alignment with the adopted educational approach, consultation with all disciplines, and ease of modification of information and format once published. Copyright © 2010 Chartered

  10. E-learning in medical education in resource constrained low- and middle-income countries

    Science.gov (United States)

    2013-01-01

    Background In the face of severe faculty shortages in resource-constrained countries, medical schools look to e-learning for improved access to medical education. This paper summarizes the literature on e-learning in low- and middle-income countries (LMIC), and presents the spectrum of tools and strategies used. Methods Researchers reviewed literature using terms related to e-learning and pre-service education of health professionals in LMIC. Search terms were connected using the Boolean Operators “AND” and “OR” to capture all relevant article suggestions. Using standard decision criteria, reviewers narrowed the article suggestions to a final 124 relevant articles. Results Of the relevant articles found, most referred to e-learning in Brazil (14 articles), India (14), Egypt (10) and South Africa (10). While e-learning has been used by a variety of health workers in LMICs, the majority (58%) reported on physician training, while 24% focused on nursing, pharmacy and dentistry training. Although reasons for investing in e-learning varied, expanded access to education was at the core of e-learning implementation which included providing supplementary tools to support faculty in their teaching, expanding the pool of faculty by connecting to partner and/or community teaching sites, and sharing of digital resources for use by students. E-learning in medical education takes many forms. Blended learning approaches were the most common methodology presented (49 articles) of which computer-assisted learning (CAL) comprised the majority (45 articles). Other approaches included simulations and the use of multimedia software (20 articles), web-based learning (14 articles), and eTutor/eMentor programs (3 articles). Of the 69 articles that evaluated the effectiveness of e-learning tools, 35 studies compared outcomes between e-learning and other approaches, while 34 studies qualitatively analyzed student and faculty attitudes toward e-learning modalities. Conclusions E-learning

  11. E-learning in medical education in resource constrained low- and middle-income countries

    Directory of Open Access Journals (Sweden)

    Frehywot Seble

    2013-02-01

    Full Text Available Abstract Background In the face of severe faculty shortages in resource-constrained countries, medical schools look to e-learning for improved access to medical education. This paper summarizes the literature on e-learning in low- and middle-income countries (LMIC), and presents the spectrum of tools and strategies used. Methods Researchers reviewed literature using terms related to e-learning and pre-service education of health professionals in LMIC. Search terms were connected using the Boolean Operators “AND” and “OR” to capture all relevant article suggestions. Using standard decision criteria, reviewers narrowed the article suggestions to a final 124 relevant articles. Results Of the relevant articles found, most referred to e-learning in Brazil (14 articles), India (14), Egypt (10) and South Africa (10). While e-learning has been used by a variety of health workers in LMICs, the majority (58%) reported on physician training, while 24% focused on nursing, pharmacy and dentistry training. Although reasons for investing in e-learning varied, expanded access to education was at the core of e-learning implementation, which included providing supplementary tools to support faculty in their teaching, expanding the pool of faculty by connecting to partner and/or community teaching sites, and sharing of digital resources for use by students. E-learning in medical education takes many forms. Blended learning approaches were the most common methodology presented (49 articles), of which computer-assisted learning (CAL) comprised the majority (45 articles). Other approaches included simulations and the use of multimedia software (20 articles), web-based learning (14 articles), and eTutor/eMentor programs (3 articles). Of the 69 articles that evaluated the effectiveness of e-learning tools, 35 studies compared outcomes between e-learning and other approaches, while 34 studies qualitatively analyzed student and faculty attitudes toward e-learning modalities.

  12. Enhancing Teaching and Learning Wi-Fi Networking Using Limited Resources to Undergraduates

    Science.gov (United States)

    Sarkar, Nurul I.

    2013-01-01

    Motivating undergraduate students to learn Wi-Fi (wireless fidelity) networking is often difficult because many students find the subject rather technical and abstract when presented in a traditional lecture format. This paper focuses on the teaching and learning aspects of Wi-Fi networking using limited hardware resources. It…

  13. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions to one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools into their daily workflows.
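To make the idea concrete, here is a minimal sketch of what a semi-automatic generation tool does for a couple of Dublin Core elements; the heuristics below (first line as title, frequent terms as subjects) are illustrative only and are not taken from any of the surveyed tools:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "on", "with", "this"}

def extract_metadata(text, n_keywords=5):
    """Heuristically derive a partial Dublin Core record from raw text.

    Only dc:title and dc:subject are filled; the remaining elements are
    left for a human cataloguer, mirroring the piecemeal element coverage
    the survey above describes.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    title = lines[0] if lines else ""
    words = re.findall(r"[a-z]{4,}", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {
        "dc:title": title,
        "dc:subject": [w for w, _ in counts.most_common(n_keywords)],
    }

record = extract_metadata(
    "Metadata Generation Tools\n"
    "Semi-automatic metadata generation tools assist cataloguers. "
    "Metadata tools address repositories and repositories of big data."
)
print(record["dc:title"])  # Metadata Generation Tools
```

A real tool would replace the keyword heuristic with classification or named-entity models, but the output shape (a partially filled element set needing human review) is the same.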

  14. Researching into Learning Resources in Colleges and Universities. The Practical Research Series.

    Science.gov (United States)

    Higgins, Chris; Reading, Judy; Taylor, Paul

    This book examines issues and methods for conducting research into the educational resource environment in colleges and universities. That environment is defined as whatever is used to facilitate the learning process, including learning space, support staff, and teaching staff. Chapter 1 is an introduction to the series and lays out the process of…

  15. Domain-specific markup languages and descriptive metadata: their functions in scientific resource discovery

    Directory of Open Access Journals (Sweden)

    Marcia Lei Zeng

    2010-01-01

    Full Text Available While metadata has been a strong focus within information professionals' publications, projects, and initiatives during the last two decades, a significant number of domain-specific markup languages have also been developing on a parallel path at the same rate as metadata standards; yet they do not receive comparable attention. This essay discusses the functions of these two kinds of approaches in scientific resource discovery and points out their potential complementary roles through appropriate interoperability approaches.
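A toy crosswalk illustrates how the two approaches can complement each other: a domain-specific markup record (the element names here are invented, not from any real standard) is mapped onto generic Dublin Core metadata through which discovery services can see it:

```python
import xml.etree.ElementTree as ET

# A hypothetical domain-specific markup record.
record = ET.fromstring(
    "<experiment>"
    "<name>Seawater pH time series</name>"
    "<investigator>J. Doe</investigator>"
    "<measurement unit='pH'>8.05</measurement>"
    "</experiment>"
)

# Crosswalk: map domain elements onto Dublin Core, the generic
# descriptive-metadata layer used for resource discovery. Domain detail
# (the measurement itself) stays in the markup; discovery uses the DC view.
dublin_core = {
    "dc:title": record.findtext("name"),
    "dc:creator": record.findtext("investigator"),
}
print(dublin_core["dc:title"])  # Seawater pH time series
```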

  16. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
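The coupling-checking role such an ontology can play is sketched below; the class name `ModelComponent` and the variable-matching rule are illustrative stand-ins for WRC concepts, not the published vocabulary:

```python
from dataclasses import dataclass, field

@dataclass
class ModelComponent:
    """Illustrative metadata for one model component."""
    name: str
    inputs: set = field(default_factory=set)    # variables the component consumes
    outputs: set = field(default_factory=set)   # variables the component produces

def can_couple(upstream, downstream):
    """A coupling is plausible when the upstream component produces at
    least one variable the downstream component consumes."""
    return bool(upstream.outputs & downstream.inputs)

rainfall = ModelComponent("RainfallRunoff", inputs={"precipitation"}, outputs={"streamflow"})
routing = ModelComponent("ChannelRouting", inputs={"streamflow"}, outputs={"stage"})
print(can_couple(rainfall, routing))  # True
```

With component metadata published against a shared ontology, this kind of check can run across modeling frameworks rather than inside just one.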

  17. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Science.gov (United States)

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
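The four-level classification can be pictured as a nested hierarchy, Study → Biosample → Sequencing Project → Analysis Project; the identifiers and nesting below are invented for illustration and are not GOLD's actual schema:

```python
# Toy four-level catalog: study -> biosample -> sequencing project -> analyses.
gold = {
    "Gs001": {                            # study
        "Gb001": {                        # biosample
            "Gp001": ["Ga001", "Ga002"],  # sequencing project -> analysis projects
        },
        "Gb002": {
            "Gp002": ["Ga003"],
        },
    },
}

def count_analyses(catalog):
    """Walk all four levels and count analysis projects."""
    return sum(
        len(analyses)
        for study in catalog.values()
        for biosample in study.values()
        for analyses in biosample.values()
    )

print(count_analyses(gold))  # 3
```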

  18. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Tatiparthi B. K. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Thomas, Alex D. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Stamatis, Dimitri [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bertsch, Jon [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Isbandi, Michelle [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Jansson, Jakob [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Mallajosyula, Jyothi [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Pagani, Ioanna [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lobos, Elizabeth A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Kyrpides, Nikos C. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); King Abdulaziz Univ., Jeddah (Saudi Arabia)

    2014-10-27

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  19. Learning on human resources management in the radiology residency program

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Aparecido Ferreira de; Lederman, Henrique Manoel; Batista, Nildo Alves, E-mail: aparecidoliveira@ig.com.br [Universidade Federal de Sao Paulo (EPM/UNIFESP), Sao Paulo, SP (Brazil). Escola Paulista de Medicina

    2014-03-15

    Objective: to investigate the process of learning on human resource management in the radiology residency program at Escola Paulista de Medicina - Universidade Federal de Sao Paulo, aiming at improving radiologists' education. Materials and methods: exploratory study with a quantitative and qualitative approach developed with the faculty staff, preceptors and residents of the program, utilizing a Likert questionnaire (46), taped interviews (18), and categorization based on thematic analysis. Results: According to 71% of the participants, residents have clarity about their role in the development of their activities, and 48% said that residents have no opportunity to learn how to manage their work in a multidisciplinary team. Conclusion: Isolation in the medical records room, little interactivity between sectors with diversified and fixed activities, absence of a prior culture of human resources management, and lack of a training program on it may interfere with the development of skills for the residents' practice. There is a need to review the objectives of the medical residency in the field of radiology, incorporating, whenever possible, a commitment to the training of skills related to human resources management, thus widening the scope of abilities of future radiologists. (author)

  20. Learning on human resources management in the radiology residency program

    International Nuclear Information System (INIS)

    Oliveira, Aparecido Ferreira de; Lederman, Henrique Manoel; Batista, Nildo Alves

    2014-01-01

    Objective: to investigate the process of learning on human resource management in the radiology residency program at Escola Paulista de Medicina - Universidade Federal de Sao Paulo, aiming at improving radiologists' education. Materials and methods: exploratory study with a quantitative and qualitative approach developed with the faculty staff, preceptors and residents of the program, utilizing a Likert questionnaire (46), taped interviews (18), and categorization based on thematic analysis. Results: According to 71% of the participants, residents have clarity about their role in the development of their activities, and 48% said that residents have no opportunity to learn how to manage their work in a multidisciplinary team. Conclusion: Isolation in the medical records room, little interactivity between sectors with diversified and fixed activities, absence of a prior culture of human resources management, and lack of a training program on it may interfere with the development of skills for the residents' practice. There is a need to review the objectives of the medical residency in the field of radiology, incorporating, whenever possible, a commitment to the training of skills related to human resources management, thus widening the scope of abilities of future radiologists. (author)

  1. Metadata and their impact on processes in Building Information Modeling

    Directory of Open Access Journals (Sweden)

    Vladimir Nyvlt

    2014-04-01

    Full Text Available Building Information Modeling (BIM) holds huge potential for increasing the effectiveness of every project across its whole life cycle: from the initial investment plan, through design and construction activities, to long-term usage, property maintenance and, finally, demolition. Knowledge management, or better, knowledge sharing, covers two sets of tools: managerial and technological. Managers' needs are the real expectations and desires of final users in terms of how they could benefit from managing long-term projects covering the whole life cycle, sparing investment money and other resources. The technology employed can help BIM processes support and deliver these benefits to users. We discuss how to use this technology for data and metadata collection, storage and sharing, and which processes these new technologies may deploy. We also touch on a proposal for optimized processes that better and more smoothly support knowledge sharing across the project time-scale and the whole life cycle.

  2. Structure constrained by metadata in networks of chess players.

    Science.gov (United States)

    Almeira, Nahuel; Schaigorodsky, Ana L; Perotti, Juan I; Billoni, Orlando V

    2017-11-09

    Chess is an emblematic sport that stands out because of its age, popularity and complexity. It has served to study human behavior from the perspective of a wide number of disciplines, from cognitive skills such as memory and learning, to aspects like innovation and decision-making. Given that an extensive documentation of chess games played throughout history is available, it is possible to perform detailed and statistically significant studies about this sport. Here we use one of the most extensive chess databases in the world to construct two networks of chess players. One of the networks includes games that were played over-the-board and the other contains games played on the Internet. We study the main topological characteristics of the networks, such as degree distribution and correlations, transitivity and community structure. We complement the structural analysis by incorporating players' level of play as node metadata. Although both networks are topologically different, we show that in both cases players gather in communities according to their expertise and that an emergent rich-club structure, composed by the top-rated players, is also present.
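The rich-club structure mentioned above is commonly quantified by the rich-club coefficient: the edge density among nodes whose degree exceeds k. A minimal sketch on a toy edge list (standing in for the chess-player networks):

```python
from itertools import combinations

# Toy undirected graph: nodes 0-3 form a dense core, 4-5 hang off it.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}

def degree(n):
    return sum(1 for e in edges if n in e)

def rich_club(k):
    """Density of the subgraph induced by nodes with degree > k."""
    rich = [n for n in range(6) if degree(n) > k]
    if len(rich) < 2:
        return 0.0
    possible = len(rich) * (len(rich) - 1) / 2
    actual = sum(1 for a, b in combinations(rich, 2)
                 if (a, b) in edges or (b, a) in edges)
    return actual / possible

print(rich_club(2))  # 1.0: the degree->2 core is fully connected
```

A value near 1 at high k, as here, signals a rich club; studies like the one above typically also normalize against a randomized network before drawing conclusions.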

  3. Resource-Based Learning and Class Organisation for Adult EFL Learners.

    Science.gov (United States)

    Gewirtz, Agatha

    1979-01-01

    A list is presented of special factors pertaining to English as a foreign language classes in England that provide strong arguments for organizing them as resource-based learning situations. Students can be in control of their studies, engaging in independent, individual work. (SW)

  4. A palliative care resource for professional carers of people with learning disabilities.

    Science.gov (United States)

    Reddall, C

    2010-07-01

    People with learning disabilities who have a life-threatening illness are as entitled as other members of the population to receive good palliative care in their home of choice. However, professional carers of people with learning disabilities are generally unaware of the meaning of palliative care and how they can access palliative care support. More importantly, they may feel they are not capable of caring for a resident with a life-threatening illness in the home environment. This article uses a case study to help illustrate the value of compiling a resource booklet for professional carers of people with learning disabilities. By providing information on palliative care that is easy to understand and easily accessible, professional carers can have a valuable resource which will enable them to provide general palliative care when needed. (I use the term professional carers to refer to carers who are paid to look after people with learning disabilities, either in care homes or in supported living homes in the general community.)

  5. Reflections of health care professionals on e-learning resources for patient safety.

    Science.gov (United States)

    Walsh, Kieran

    2018-01-01

    There is a paucity of evidence on how health care professionals view e-learning as a means of education to achieve safer health care. To address this gap, the reflections of health care professionals who used the resources on BMJ Learning were captured and analyzed. Key themes emerged from the analysis. Health care professionals are keen to put their e-learning into action to achieve safer health care and to learn how to follow guidelines that will help them achieve safer health care. Learners wanted their learning to remain grounded in reality. Finally, many commented that it was difficult for their individual learning to have a real impact when the culture of the organization did not change.

  6. A metadata catalog for organization and systemization of fusion simulation data

    International Nuclear Information System (INIS)

    Greenwald, M.; Fredian, T.; Schissel, D.; Stillerman, J.

    2012-01-01

    Highlights: ► We find that modeling and simulation data need better systemization. ► Workflow, data provenance and relations among data items need to be captured. ► We have begun a design for a simulation metadata catalog that meets these needs. ► The catalog design also supports creation of science notebooks for simulation. - Abstract: Careful management of data and associated metadata is a critical part of any scientific enterprise. Unfortunately, most current fusion simulation efforts lack systematic, project-wide organization of their data. This paper describes an approach to managing simulation data through creation of a comprehensive metadata catalog, currently under development. The catalog is intended to document all past and current simulation activities (including data provenance); to enable global data location and to facilitate data access, analysis and visualization through uniform provision of metadata. The catalog will capture workflow, holding entries for each simulation activity including, at least, data importing and staging, data pre-processing and input preparation, code execution, data storage, post-processing and exporting. The overall aim is that between the catalog and the main data archive, the system would hold a complete and accessible description of the data, all of its attributes and the processes used to generate the data. The catalog will describe data collections, including those representing simulation workflows as well as any other useful groupings. Finally it would be populated with user supplied comments to explain the motivation and results of any activity documented by the catalog.
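A minimal sketch of such a workflow catalog, using an in-memory SQLite table; the table and column names are invented for illustration, not the catalog's actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE activity (
    id INTEGER PRIMARY KEY,
    kind TEXT,          -- e.g. 'input-preparation', 'code-execution'
    inputs TEXT,        -- upstream data items (provenance)
    outputs TEXT,       -- data items this activity produced
    comment TEXT        -- user-supplied notes, as in a science notebook
)""")
db.execute(
    "INSERT INTO activity (kind, inputs, outputs, comment) VALUES (?, ?, ?, ?)",
    ("code-execution", "run42.in", "run42.h5", "baseline transport run"),
)

# Provenance query: which activity produced a given data item?
row = db.execute(
    "SELECT kind, inputs FROM activity WHERE outputs = ?", ("run42.h5",)
).fetchone()
print(row)  # ('code-execution', 'run42.in')
```

The point is the queryable link from any output back through the activity that produced it to its inputs, which is exactly what the catalog's workflow entries provide at project scale.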

  7. Use of Online Learning Resources in the Development of Learning Environments at the Intersection of Formal and Informal Learning: The Student as Autonomous Designer

    Directory of Open Access Journals (Sweden)

    Maja Lebeničnik

    2015-06-01

    Full Text Available Learning resources that are used in the education of university students are often available online. The nature of new technologies causes an interweaving of formal and informal learning, with the result that a more active role is expected from students with regard to the use of ICT for their learning. The variety of online learning resources (learning content and learning tools) facilitates informed use and enables students to create the learning environment that is most appropriate for their personal learning needs and preferences. In contemporary society, the creation of an inclusive learning environment supported by ICT is pervasive. The model of Universal Design for Learning is becoming increasingly significant in responding to the need for inclusive learning environments. In this article, we categorize different online learning activities into the principles of Universal Design for Learning. This study examines ICT use among university students (N = 138), comparing student teachers with students in other study programs. The findings indicate that among all students, activities with lower demands for engagement are most common. Some differences were observed between student teachers and students from other programs. Student teachers were more likely than their peers to perform certain activities aimed at meeting diverse learner needs, but the percentage of students performing more advanced activities was higher for students in other study programs than for student teachers. The categorization of activities revealed that student teachers are less likely to undertake activities that involve interaction with others. Among the sample of student teachers, we found that personal innovativeness is correlated with diversity of activities in only one category. The results show that student teachers should be encouraged to perform more advanced activities, especially activities involving interaction with others, collaborative learning and use of ICT to

  8. Data Bookkeeping Service 3 - Providing event metadata in CMS

    CERN Document Server

    Giffels, Manuel; Riley, Daniel

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all the information necessary for tracking datasets, their processing history and the associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems; all kinds of data processing, such as Monte Carlo production and the processing of recorded event data, as well as physics analysis done by the users, rely heavily on the information stored in DBS.

  9. Data Bookkeeping Service 3 - Providing Event Metadata in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Giffels, Manuel [CERN; Guo, Y. [Fermilab; Riley, Daniel [Cornell U.

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all the information necessary for tracking datasets, their processing history and the associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems [1]; all kinds of data processing, such as Monte Carlo production and the processing of recorded event data, as well as physics analysis done by the users, rely heavily on the information stored in DBS.

  10. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    CERN Document Server

    Lambert, Fabian; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Used mainly by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore the service must guarantee a high level of availability. We describe our monitoring system and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand. Moreover, we describe how to switch to a distant replica in case of downtime.

  11. Criteria and foundations for the implementation of the Learning Resource Centers

    OpenAIRE

    Raquel Zamora Fonseca

    2013-01-01

    Reviews the criteria and rationale for the implementation of library-based learning and research resource centers. The analysis focuses on the implementation of CRAIs in university libraries and the organizational models they can adopt.

  12. The use of interactive graphical maps for browsing medical/health Internet information resources

    Directory of Open Access Journals (Sweden)

    Boulos Maged

    2003-01-01

    Full Text Available Abstract As online information portals accumulate metadata descriptions of Web resources, it becomes necessary to develop effective ways of visualising and navigating the resultant huge metadata repositories, as well as the different semantic relationships and attributes of the described Web resources. Graphical maps provide a good method to visualise, understand and navigate a world that is too large and complex to be seen directly, like the Web. Several examples of maps designed as navigational aids for Web resources are presented in this review, with an emphasis on maps of medical and health-related resources. The latter include HealthCyberMap maps http://healthcybermap.semanticweb.org/, which can be classified as conceptual information space maps, and the very abstract and geometric Visual Net maps of PubMed http://map.net (for demos). Information resources can also be organised and navigated based on their geographic attributes. Some of the maps presented in this review use a Kohonen Self-Organising Map algorithm, and only HealthCyberMap uses a Geographic Information System to classify Web resource data and render the maps. Maps based on familiar metaphors taken from users' everyday life are much easier to understand. Associative and pictorial map icons that enable instant recognition and comprehension are preferred to geometric ones and are key to successful maps for browsing medical/health Internet information resources.
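For reference, the Kohonen Self-Organising Map algorithm mentioned above can be sketched in its simplest one-dimensional form; the toy data and parameters here are chosen arbitrarily, and real map-making tools work in two dimensions with decaying learning rates:

```python
import random

random.seed(1)
weights = [random.random() for _ in range(5)]  # 5 map units, 1-D inputs

def train(data, epochs=50, lr=0.3, radius=1):
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the unit whose weight is closest to x.
            bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
            # Pull the BMU and its map neighbours toward the input.
            for i in range(len(weights)):
                if abs(i - bmu) <= radius:
                    weights[i] += lr * (x - weights[i])

# Two input clusters (around 0.1 and 0.9); after training, some units
# settle near each cluster, which is what lets a SOM lay similar
# resources close together on a map.
train([0.1, 0.15, 0.9, 0.95])
```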

  13. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Science.gov (United States)

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473
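The allocation idea can be sketched as follows; the even/odd MSB/LSB page mapping is a simplification (real MLC chips use chip-specific pairings), so treat this as illustrative only:

```python
def classify_page(page_no):
    """Simplified reliability class: even pages stand in for MSB pages."""
    return "MSB" if page_no % 2 == 0 else "LSB"

def allocate(block_pages, is_metadata):
    """Return the first free page of the appropriate reliability class:
    metadata goes to the more reliable MSB pages, ordinary data to LSB."""
    wanted = "MSB" if is_metadata else "LSB"
    for p in sorted(block_pages):
        if block_pages[p] is None and classify_page(p) == wanted:
            return p
    return None  # no free page of that class in this block

block = {p: None for p in range(8)}  # 8 free pages in an erase block
meta_page = allocate(block, is_metadata=True)
data_page = allocate(block, is_metadata=False)
print(meta_page, data_page)  # 0 1
```

The paper's address-mapping and garbage-collection strategies then keep this metadata/data split intact as pages are rewritten and blocks are reclaimed.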

  14. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Directory of Open Access Journals (Sweden)

    Min Huang

    2014-10-01

    While NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and the increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it is critical to enhance the reliability of metadata, which occupies only a small portion of the storage space but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits, for the first time, the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages, which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.

  15. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-11

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result, it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs. It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation

  16. Novel Machine Learning-Based Techniques for Efficient Resource Allocation in Next Generation Wireless Networks

    KAUST Repository

    AlQuerm, Ismail A.

    2018-02-21

    There is a large demand for applications of high data rates in wireless networks. These networks are becoming more complex and challenging to manage due to the heterogeneity of users and applications, specifically in sophisticated networks such as the upcoming 5G. Energy efficiency in the future 5G network is one of the essential problems that needs consideration due to the interference and heterogeneity of the network topology. Smart resource allocation, environmental adaptivity, user-awareness and energy efficiency are essential features in the future networks. It is important to support these features in different network topologies with various applications. Cognitive radio has been found to be the paradigm that is able to satisfy the above requirements. It is a very interdisciplinary topic that incorporates flexible system architectures, machine learning, context awareness and cooperative networking. Mitola’s vision of cognitive radio intended to build context-sensitive smart radios that are able to adapt to the wireless environment conditions while maintaining quality-of-service support for different applications. Artificial intelligence techniques, including heuristic algorithms and machine learning, are the shining tools that are employed to serve the new vision of cognitive radio. In addition, these techniques show a potential to be utilized for efficient resource allocation in the upcoming 5G networks’ structures, such as heterogeneous multi-tier 5G networks and heterogeneous cloud radio access networks, due to their capability to allocate resources according to real-time data analytics. In this thesis, we study cognitive radio from a system point of view, focusing closely on architectures and artificial intelligence techniques that can enable intelligent radio resource allocation and efficient radio parameters reconfiguration. We propose a modular cognitive resource management architecture, which facilitates a development of flexible control for

  17. Students' "Uses and Gratification Expectancy" Conceptual Framework in Relation to E-Learning Resources

    Science.gov (United States)

    Mondi, Makingu; Woods, Peter; Rafi, Ahmad

    2007-01-01

    This paper presents the systematic development of a "Uses and Gratification Expectancy" (UGE) conceptual framework which is able to predict students' "Perceived e-Learning Experience." It is argued that students' UGE as regards e-learning resources cannot be implicitly or explicitly explored without first examining underlying communication…

  18. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40-year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext and search in figure captions. In order to achieve high-quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to provide the curators with maximum efficiency, while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  19. Parallel file system with metadata distributed across partitioned key-value store c

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
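
As a rough illustration of the partitioned metadata store described above, the sketch below hashes each sub-file metadata key to one of several partitions, one per compute node. The key format, the partition count, and the use of in-process dictionaries in place of MDHIM and MPI are assumptions for illustration only.

```python
import hashlib

# Sketch: distribute shared-file metadata across per-node partitions,
# in the spirit of the partitioned key-value store (MDHIM) above.
NUM_NODES = 4
stores = [dict() for _ in range(NUM_NODES)]  # stand-in for per-node stores

def partition_for(key: str) -> int:
    """Map a metadata key (e.g. 'sharedfile/offset/4096') to a node rank."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

def put(key, value):
    # In the real system this would be an MPI message to the owning node.
    stores[partition_for(key)][key] = value

def get(key):
    return stores[partition_for(key)].get(key)

# Record where one logical sub-file region physically lives (toy values).
put("sharedfile/offset/0", {"node": 2, "phys": "/plfs/host2/data.0"})
```

Deterministic hashing means any node can compute which partition owns a key without a central lookup, which is what makes the scheme scale.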

  20. Standardizing metadata and taxonomic identification in metabarcoding studies

    NARCIS (Netherlands)

    Tedersoo, Leho; Ramirez, Kelly; Nilsson, R; Kaljuvee, Aivi; Koljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering

  1. Revision of IRIS/IDA Seismic Station Metadata

    Science.gov (United States)

    Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.

    2017-12-01

    Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include the zero-frequency sensitivity of both the seismometer and the data logger as well as other, frequency-dependent elements. In this work, we mainly focus on studying the variation over time of the seismometer sensitivity of IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice developed recently is to collocate a reference seismometer in proximity to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect the two sensors' gain ratio in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from these three methods, and conclude that the collocated-sensor method is the most stable and reliable, with the smallest uncertainties throughout. For epochs with neither a collocated sensor nor a secondary seismometer, we rely on the analysis results from the tide method. For the data since 1992 on IDA stations, we computed over 600 revised seismometer sensitivities for all the IRIS/IDA network calibration epochs. We expect further revision procedures to help guarantee that the data of these stations are accurately described by their metadata.
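
The collocated-sensor comparison in the abstract can be illustrated with a toy computation: estimate a relative gain from the amplitude ratio of two simultaneously recorded traces. The synthetic traces and the plain RMS ratio below are illustrative assumptions; real processing would first band-pass both records to the microseism frequency band and use the actual transfer function metric.

```python
import math

# Toy sketch of the collocated-sensor idea: if a station sensor records
# the same ground motion as a trusted reference sensor, the ratio of
# their trace amplitudes estimates the station's relative gain error.

def rms(xs):
    """Root-mean-square amplitude of a trace."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Synthetic "ground motion" and a station sensor that reads 3% low.
reference = [math.sin(0.1 * i) for i in range(1000)]
station = [0.97 * s for s in reference]

gain_ratio = rms(station) / rms(reference)  # recovers roughly 0.97
```

A gain ratio drifting away from 1.0 across calibration epochs is exactly the kind of sensitivity change the revised metadata would need to capture.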

  2. Learned Resourcefulness and the Long-Term Benefits of a Chronic Pain Management Program

    Science.gov (United States)

    Kennett, Deborah J.; O'Hagan, Fergal T.; Cezer, Diego

    2008-01-01

    A concurrent mixed methods approach was used to understand how learned resourcefulness empowers individuals. After completing Rosenbaum's Self-Control Schedule (SCS) measuring resourcefulness, 16 past clients of a multimodal pain clinic were interviewed about the kinds of pain-coping strategies they were practicing from the program. Constant…

  3. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    Science.gov (United States)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.

  4. The relevance of music information representation metadata from the perspective of expert users

    Directory of Open Access Journals (Sweden)

    Camila Monteiro de Barros

    The general goal of this research was to verify which metadata elements of music information representation are relevant for its retrieval from the perspective of expert music users. Based on bibliographical research, a comprehensive metadata set for music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the most relevant information for expert music users is related to identification and authorship responsibilities. The respondents from the Composition and Interpretative Practice areas agree with these results, while the respondents from the Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.

  5. Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories.

    Science.gov (United States)

    Neu, Scott C; Crawford, Karen L; Toga, Arthur W

    2012-01-01

    Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead.

  6. Students’ Use of Knowledge Resources in Environmental Interaction on an Outdoor Learning Trail

    NARCIS (Netherlands)

    Tan, Esther; So, Hyo-Jeong

    2016-01-01

    This study examined how students leveraged different types of knowledge resources on an outdoor learning trail. We positioned the learning trail as an integral part of the curriculum with a pre- and post-trail phase to scaffold and to support students’ meaning-making process. The study was conducted

  7. Blended learning in dentistry: 3-D resources for inquiry-based learning

    Directory of Open Access Journals (Sweden)

    Susan Bridges

    2012-06-01

    Motivation is an important factor for inquiry-based learning, so creative design of learning resources and materials is critical to enhance students’ motivation and hence their cognition. Modern dentistry is moving towards “electronic patient records” for both clinical treatment and teaching. Study models have long been an essential part of dental records. Traditional plaster casts are, however, among the last type of clinical record in the dental field to be converted into digital media as virtual models. Advantages of virtual models include: simpler storage; reduced risk of damage, disappearance, or misplacement; simpler and effective measuring; and easy transferal to colleagues. In order to support student engagement with the rapidly changing world of digital dentistry, and in order to stimulate the students’ motivation and depth of inquiry, this project aims to introduce virtual models into a Bachelor of Dental Surgery (BDS) curriculum. Under a “blended” e-learning philosophy, students are first introduced to the new software, then 3-D models are incorporated into inquiry-based problems as stimulus materials. Face-to-face tutorials blend in virtual model access via interactive whiteboards (IWBs). Students’ perceptions of virtual models, including motivation and cognition as well as the virtual models’ functionality, were rated after a workshop introducing virtual models and plaster models in parallel. Initial student feedback indicates that the 3-D models have been generally well accepted, which confirmed the functionality of the programme and the positive perception of virtual models for enhancing students’ learning motivation. Further investigation will be carried out to assess the impact of virtual models on students’ learning outcomes.

  8. Criteria and foundations for the implementation of the Learning Resource Centers

    Directory of Open Access Journals (Sweden)

    Raquel Zamora Fonseca

    2013-03-01

    This paper reviews the criteria and rationale for the implementation of research-library and learning resource centers. The analysis focuses on the implementation of CRAIs in university libraries and the organizational models they can take.

  9. Learning Resources Centers and Their Effectiveness on Students’ Learning Outcomes: A Case-Study of an Omani Higher Education Institute

    Directory of Open Access Journals (Sweden)

    Peyman Nouraey

    2017-06-01

    The study aimed at investigating the use and effectiveness of a learning resources center, generally known as a library. In doing so, eight elements were investigated through an author-designed questionnaire, each delving into certain aspects of the afore-mentioned center: (a) students’ visit frequency, (b) availability of books related to modules, (c) center facilities, (d) use of discussion rooms, (e) use of online resources, (f) staff cooperation, (g) impact on knowledge enhancement, and (h) recommendation to peers. Eighty undergraduate students participated in the study. Participants were asked to read the statements carefully and choose one of the five responses provided, ranging from strongly agree to strongly disagree. Data were analyzed on a 5-point Likert scale. Findings of the study revealed that participants were mostly in agreement with all eight statements provided in the questionnaire, which was interpreted as positive feedback from the students. The frequencies of responses were then reported. Finally, the results were compared and contrasted, and the effectiveness of libraries and learning resources centers on students’ learning performance and outcomes was discussed.

  10. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems alone cannot support the development of the software efficiently, due to a large number of errors, each of which breaks the build process. Common causes of errors are, for example, the adoption of new libraries, library incompatibilities, or the extension of the current project to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS following the version control system philosophy, through a meta-data repository and a set of operations such as check out and commit. All the information related to a specific software component is stored in the repository only when it is considered to be correct. By means of this solution, we introduce a degree of flexibility into the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS will behave like a version control system for the management of the meta-data.

  11. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Science.gov (United States)

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. 
Summative assessment included discussion leadership, exams, homework assignments, group projects, in-class exercises, field trips, and pre-discussion reading exercises.

  12. A machine learning approach for predicting the relationship between energy resources and economic development

    Science.gov (United States)

    Cogoljević, Dušan; Alizamir, Meysam; Piljan, Ivan; Piljan, Tatjana; Prljić, Katarina; Zimonjić, Stefan

    2018-04-01

    The linkage between energy resources and economic development is a topic of great interest. Research in this area is also motivated by contemporary concerns about global climate change, carbon emissions, fluctuating crude oil prices, and the security of energy supply. The purpose of this research is to develop and apply a machine learning approach to predict gross domestic product (GDP) based on the mix of energy resources. Our results indicate that GDP predictive accuracy can be improved slightly by applying a machine learning approach.
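
As a toy illustration of the paper's setup (not its actual models or data), the sketch below fits a one-feature ordinary-least-squares line predicting GDP from a hypothetical energy-consumption index; the study itself uses more sophisticated machine-learning models over a mix of energy resources.

```python
# Toy stand-in for the paper's task: predict GDP from an energy feature.
# Data and the single-feature OLS fit are illustrative assumptions.

def ols_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

energy_use = [100, 120, 140, 160]   # hypothetical energy consumption index
gdp = [1.0, 1.3, 1.5, 1.8]          # hypothetical GDP values

slope, intercept = ols_fit(energy_use, gdp)
predicted_gdp = slope * 150 + intercept  # interpolate an unseen year
```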

  13. Biomedical word sense disambiguation with ontologies and metadata: automation meets accuracy

    Directory of Open Access Journals (Sweden)

    Hakenberg Jörg

    2009-01-01

    Background: Ontology term labels can be ambiguous and have multiple senses. While this is no problem for human annotators, it is a challenge to automated methods that identify ontology terms in text. Classical approaches to word sense disambiguation use co-occurring words or terms. However, most treat ontologies as simple terminologies, without making use of the ontology structure or the semantic similarity between terms. Another useful source of information for disambiguation is metadata. Here, we systematically compare three approaches to word sense disambiguation, which use ontologies and metadata, respectively. Results: The 'Closest Sense' method assumes that the ontology defines multiple senses of the term; it computes the shortest path from co-occurring terms in the document to one of these senses. The 'Term Cooc' method defines a log-odds ratio for co-occurring terms, including co-occurrences inferred from the ontology structure. The 'MetaData' approach trains a classifier on metadata; it does not require any ontology, but requires training data, which the other methods do not. To evaluate these approaches we defined a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. All approaches over all conditions achieve 80% success rate on average. The 'MetaData' approach performed best, with 96% when trained on high-quality data; its performance deteriorates as the quality of the training data decreases. The 'Term Cooc' approach performs better on the Gene Ontology (92% success) than on MeSH (73% success), as MeSH is not a strict is-a/part-of but rather a loose is-related-to hierarchy. The 'Closest Sense' approach achieves an 80% success rate on average. Conclusion: Metadata is valuable for disambiguation, but requires high-quality training data. Closest Sense requires no training, but a large, consistently modelled ontology, which are two opposing conditions. Term Cooc achieves greater 90
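
The 'Term Cooc' idea of scoring candidate senses by the log-odds of co-occurring context terms can be sketched as follows. The counts, the add-one smoothing, and the example senses are toy assumptions, not the paper's data or exact formula.

```python
import math
from collections import Counter

# Toy co-occurrence counts: how often each context term appears in
# documents known to use each sense of an ambiguous term.
cooc_counts = {
    "development_biological": Counter({"cell": 40, "gene": 25}),
    "development_software":   Counter({"cell": 2, "gene": 1}),
}
sense_totals = {"development_biological": 100, "development_software": 100}

def log_odds(sense, term):
    """Smoothed log-odds that `term` co-occurs with `sense`."""
    k = cooc_counts[sense][term] + 1          # add-one smoothing
    n = sense_totals[sense] + 2
    p = k / n
    return math.log(p / (1 - p))

def disambiguate(context_terms):
    """Pick the sense whose summed log-odds over the context is highest."""
    scores = {s: sum(log_odds(s, t) for t in context_terms)
              for s in cooc_counts}
    return max(scores, key=scores.get)
```

With a biomedical context such as `["cell", "gene"]`, the biological sense scores far higher than the software sense, which is the behaviour the method relies on.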

  14. Metadata Quality Improvement : DASISH deliverable 5.2A

    NARCIS (Netherlands)

    L'Hours, Hervé; Offersgaard, Lene; Wittenberg, M.; Wloka, Bartholomäus

    2014-01-01

    The aim of this task was to analyse and compare the different metadata strategies of CLARIN, DARIAH and CESSDA, and to identify possibilities of cross-fertilization to profit from each other's solutions where possible. To have a better understanding in which stages of the research lifecycle

  15. Content-aware network storage system supporting metadata retrieval

    Science.gov (United States)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a hot research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration and to achieve content-based query, we develop a new content-aware storage system that supports metadata retrieval to improve query performance. Firstly, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Secondly, the extracted metadata is encoded in Extensible Markup Language to improve universality. Thirdly, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use a corresponding query strategy to retrieve it. Fourthly, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, the experiments indicate that the retrieval strategy and sort algorithm enhance retrieval efficiency and precision.
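
The second step above (encoding extracted metadata as XML for universality) might look like the following sketch; the element names, the attribute set, and the flat document layout are assumptions for illustration, not the system's actual schema.

```python
import xml.etree.ElementTree as ET

# Sketch: serialize extracted file metadata as a small XML document so
# any consumer can parse it, in the spirit of the system's second step.

def encode_metadata(attrs: dict) -> str:
    """Wrap a flat dict of metadata attributes in a <metadata> element."""
    root = ET.Element("metadata")
    for name, value in attrs.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Hypothetical attributes: a content identifier and an ILM storage tier.
xml_doc = encode_metadata({"content_id": "a1b2c3", "tier": "online"})
```

Because the result is plain XML text, it can travel inside an extended SCSI command or any other transport without the endpoints sharing binary structure definitions.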

  16. Conditions and configuration metadata for the ATLAS experiment

    International Nuclear Information System (INIS)

    Gallas, E J; Pachal, K E; Tseng, J C L; Albrand, S; Fulachier, J; Lambert, F; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS) has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  17. Conditions and configuration metadata for the ATLAS experiment

    CERN Document Server

    Gallas, E J; Albrand, S; Fulachier, J; Lambert, F; Pachal, K E; Tseng, J C L; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS) has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user...

  18. Fostering postgraduate student engagement: online resources supporting self-directed learning in a diverse cohort

    Directory of Open Access Journals (Sweden)

    Luciane V. Mello

    2016-03-01

    The research question for this study was: ‘Can the provision of online resources help to engage and motivate students to become self-directed learners?’ This study presents the results of an action research project to answer this question for a postgraduate module at a research-intensive university in the United Kingdom. The analysis of results from the study was conducted by dividing the students according to their programme degree – Masters or PhD – and according to their language skills. The study indicated that the online resources embedded in the module were consistently used, and that the measures put in place to support self-directed learning (SDL) were both perceived and valued by the students, irrespective of their programme or native language. Nevertheless, a difference was observed in how students viewed SDL: doctoral students seemed to prefer the approach and were more receptive to it than students pursuing their Masters degree. Some students reported that the SDL activity helped them to achieve more independence than did traditional approaches to teaching. Students who engaged with the online resources were rewarded with higher marks and claimed that they were all the more motivated within the module. Despite the different learning experiences of the diverse cohort, the study found that the blended nature of the course and its resources in support of SDL created a learning environment which positively affected student learning.

  19. Open Educational Resources and Informational Ecosystems: «Edutags» as a connector for open learning

    Directory of Open Access Journals (Sweden)

    Michael Kerres

    2014-10-01

    Teaching and learning in school essentially relies on analogue and digital media, artefacts and tools of all kinds. They are supported and provided by various players. The role of these players in providing learning infrastructures, and the interaction between them, are discussed in the following paper. Increasingly, Open Educational Resources (OER) become available, and the question arises how the interaction between these players is impacted. On the one hand, some players implement closed informational ecosystems that might provide a rich and coherent environment for learning, but also lock the users into a defined and often restricted environment. On the other hand, other players are interested in developing an infrastructure that supports open learning without the boundaries of closed informational ecosystems. Such open informational ecosystems must provide interconnections to an, in principle, unlimited number of platforms for learning contents. In the context of the project «Edutags», a reference platform is being implemented in which the contents of various providers are connected and enriched through user-generated tags, commentaries and evaluations. The discussion points out that such an independent reference platform, operated separately from content platforms, must be considered an important element in an open and truly distributed infrastructure for learning resources. Hence, we do not only need open educational resources to support open learning, we also need to establish an open informational ecosystem that supports such approaches.

  20. Developing nursing and midwifery students' capacity for coping with bullying and aggression in clinical settings: Students' evaluation of a learning resource.

    Science.gov (United States)

    Hogan, Rosemarie; Orr, Fiona; Fox, Deborah; Cummins, Allison; Foureur, Maralyn

    2018-03-01

    An innovative blended learning resource for undergraduate nursing and midwifery students was developed in a large urban Australian university, following a number of concerning reports by students on their experiences of bullying and aggression in clinical settings. The blended learning resource included interactive online learning modules, comprising film clips of realistic clinical scenarios, related readings, and reflective questions, followed by in-class role-play practice of effective responses to bullying and aggression. On completion of the blended learning resource 210 participants completed an anonymous survey (65.2% response rate). Qualitative data was collected and a thematic analysis of the participants' responses revealed the following themes: 'Engaging with the blended learning resource'; 'Responding to bullying' and 'Responding to aggression'. We assert that developing nursing and midwifery students' capacity to effectively respond to aggression and bullying, using a self-paced blended learning resource, provides a solution to managing some of the demands of the clinical setting. The blended learning resource, whereby nursing and midwifery students were introduced to realistic portrayals of bullying and aggression in clinical settings, developed their repertoire of effective responding and coping skills for use in their professional practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. What Is the Impact of Online Resource Materials on Student Self-Learning Strategies?

    Science.gov (United States)

    Dowell, David John; Small, Felicity A.

    2011-01-01

    The purpose of this article is to examine how students are incorporating online resources into their self-regulated learning strategies. The process of developing these learning strategies and the importance of these strategies has been widely researched, but there has been little empirical research into how the students are affected by online…

  2. Investigating Student Use and Value of E-Learning Resources to Develop Academic Writing within the Discipline of Environmental Science

    Science.gov (United States)

    Taffs, Kathryn H.; Holt, Julienne I.

    2013-01-01

    The use of information and communication technologies (ICTs) in higher education to support student learning is expanding. However, student usage has been low and the value of e-learning resources has been under investigation. We reflect on best practices for pedagogical design of e-learning resources to support academic writing in environmental…

  3. Integration of extracellular RNA profiling data using metadata, biomedical ontologies and Linked Data technologies

    Directory of Open Access Journals (Sweden)

    Sai Lakshmi Subramanian

    2015-08-01

    The large diversity and volume of extracellular RNA (exRNA) data that will form the basis of the exRNA Atlas generated by the Extracellular RNA Communication Consortium pose a substantial data integration challenge. We here present the strategy that is being implemented by the exRNA Data Management and Resource Repository, which employs metadata, biomedical ontologies and Linked Data technologies, such as Resource Description Framework, to integrate a diverse set of exRNA profiles into an exRNA Atlas and enable integrative exRNA analysis. We focus on the following three specific data integration tasks: (a) selection of samples from a virtual biorepository for exRNA profiling and for inclusion in the exRNA Atlas; (b) retrieval of a data slice from the exRNA Atlas for integrative analysis; and (c) interpretation of exRNA analysis results in the context of pathways and networks. As exRNA profiling gains wide adoption in the research community, we anticipate that the strategies discussed here will increasingly be required to enable data reuse and to facilitate integrative analysis of exRNA data.

  4. Integration of extracellular RNA profiling data using metadata, biomedical ontologies and Linked Data technologies.

    Science.gov (United States)

    Subramanian, Sai Lakshmi; Kitchen, Robert R; Alexander, Roger; Carter, Bob S; Cheung, Kei-Hoi; Laurent, Louise C; Pico, Alexander; Roberts, Lewis R; Roth, Matthew E; Rozowsky, Joel S; Su, Andrew I; Gerstein, Mark B; Milosavljevic, Aleksandar

    2015-01-01

    The large diversity and volume of extracellular RNA (exRNA) data that will form the basis of the exRNA Atlas generated by the Extracellular RNA Communication Consortium pose a substantial data integration challenge. We here present the strategy that is being implemented by the exRNA Data Management and Resource Repository, which employs metadata, biomedical ontologies and Linked Data technologies, such as Resource Description Framework to integrate a diverse set of exRNA profiles into an exRNA Atlas and enable integrative exRNA analysis. We focus on the following three specific data integration tasks: (a) selection of samples from a virtual biorepository for exRNA profiling and for inclusion in the exRNA Atlas; (b) retrieval of a data slice from the exRNA Atlas for integrative analysis and (c) interpretation of exRNA analysis results in the context of pathways and networks. As exRNA profiling gains wide adoption in the research community, we anticipate that the strategies discussed here will increasingly be required to enable data reuse and to facilitate integrative analysis of exRNA data.
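
    The integration strategy described above rests on re-expressing heterogeneous sample metadata as RDF triples, so that profiles from different repositories share one queryable graph. A rough sketch of that idea in plain Python follows; the field names and the `exatlas:` prefix are invented for illustration and are not the consortium's actual vocabulary.

```python
# Illustrative sketch only: re-express a flat metadata record as
# RDF-style (subject, predicate, object) triples, the core idea behind
# Linked Data integration. Field names and the "exatlas:" prefix are
# hypothetical, not taken from the exRNA Atlas schema.

def record_to_triples(sample_id, record, prefix="exatlas"):
    """Turn one flat metadata record into a list of triples."""
    subject = f"{prefix}:{sample_id}"
    return [(subject, f"{prefix}:{field}", value)
            for field, value in sorted(record.items())]

sample = {"biofluid": "serum", "rnaSource": "exosome", "species": "human"}
for triple in record_to_triples("SAMPLE-001", sample):
    print(triple)
```

    A real implementation would use an RDF library and registered ontology terms; the point is only that flat records from any source become uniform statements that can be pooled into a single graph and queried regardless of each repository's original schema.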

  5. Learning about the Human Genome. Part 2: Resources for Science Educators. ERIC Digest.

    Science.gov (United States)

    Haury, David L.

    This ERIC Digest identifies how the human genome project fits into the "National Science Education Standards" and lists Human Genome Project Web sites found on the World Wide Web. It is a resource companion to "Learning about the Human Genome. Part 1: Challenge to Science Educators" (Haury 2001). The Web resources and…

  6. The RISE Framework: Using Learning Analytics to Automatically Identify Open Educational Resources for Continuous Improvement

    Science.gov (United States)

    Bodily, Robert; Nyland, Rob; Wiley, David

    2017-01-01

    The RISE (Resource Inspection, Selection, and Enhancement) Framework is a framework supporting the continuous improvement of open educational resources (OER). The framework is an automated process that identifies learning resources that should be evaluated and either eliminated or improved. This is particularly useful in OER contexts where the…

  7. Multimedia presentation as a form of E-learning resources in the educational process

    Directory of Open Access Journals (Sweden)

    Bizyaev АА

    2017-06-01

    The article describes the features of the use of multimedia presentations as an electronic learning resource in the educational process, covering the requirements such a resource must meet and the pedagogical goals it may achieve. Currently, one of the main directions in the educational process is the effective use of teaching computers. A pressing issue in the implementation of information and communication technologies in education is the development of educational resources with the aim of increasing the level and quality of education.

  8. Benefits of Record Management For Scientific Writing (Study of Metadata Reception of Zotero Reference Management Software in UIN Malang

    Directory of Open Access Journals (Sweden)

    Moch Fikriansyah Wicaksono

    2018-01-01

    Record creation and management by individuals and organizations is growing rapidly, particularly with the change from print to electronic records and their smallest part, metadata. There is therefore a need to manage record metadata, particularly for students who need to record references and citations. Reference management software (RMS) is software that helps manage references; one example is Zotero. The purpose of this article is to describe the benefits of record management for the writing of scientific papers by students, especially in the biology study programme at UIN Malik Ibrahim Malang. The type of research used is descriptive with a quantitative approach. To increase the depth of respondents' answers, we collected additional data through interviews. The population comprised 322 students from the classes of 2012 to 2014, sampled randomly; these classes were chosen because the introduction and use of the reference management software Zotero began three years earlier. The study had 80 respondents, a figure obtained from Yamane's formula. The results showed that 70% agreed that using reference management software saved time and energy in managing digital file metadata, 71% agreed that digital metadata can be quickly stored in RMS, 65% agreed on the ease of storing metadata in the reference management software, 70% agreed that it was easy to configure metadata for quotations and bibliographies, 56.6% agreed that the metadata stored in reference management software could be edited, and 73.8% agreed that using metadata makes it easier to write quotations and bibliographies.

  9. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han

    2014-01-01

    Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Community Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models.
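
    The multilayered idea of a metadata registry driving validation of both standard and extended fields can be sketched in miniature. The field names and rules below are invented for illustration; they are not taken from the ASTM CCR standard or the CCR+ implementation.

```python
# Minimal sketch of metadata-driven validation: a registry of field
# definitions validates both standard and extended records. Field names
# and rules are hypothetical, not from the ASTM CCR standard.

REGISTRY = {
    # field name:      (required, expected type, layer)
    "patient_id":      (True,  str,  "standard"),
    "medications":     (False, list, "standard"),
    "wellness_score":  (False, int,  "extended"),  # hypothetical CCR+ field
}

def validate(record):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for field, (required, ftype, layer) in REGISTRY.items():
        if field not in record:
            if required:
                errors.append(f"missing required {layer} field: {field}")
            continue
        if not isinstance(record[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
    return errors

print(validate({"patient_id": "P-17", "wellness_score": 82}))  # []
print(validate({"wellness_score": "high"}))  # two errors
```

    Because the registry, not the code, carries the field definitions, an extended model adds entries to the registry without changing the validator, which is the essence of extending a standard while keeping a single compliance check.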

  10. The evolution of chondrichthyan research through a metadata ...

    African Journals Online (AJOL)

    We compiled metadata from Sharks Down Under (1991) and the two Sharks International conferences (2010 and 2014), spanning 23 years. Analysis of the data highlighted taxonomic biases towards charismatic species, a declining number of studies in fundamental science such as those related to taxonomy and basic life ...

  11. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    Science.gov (United States)

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications.
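
    The repository described above ultimately answers pattern queries over semantically mapped metadata. A toy version of triple-pattern matching is sketched below, with invented vocabulary and `None` as a wildcard; a production system would run SPARQL against an RDF store rather than scan an in-memory set.

```python
# Toy triple-pattern query over an in-memory metadata graph. The
# "exp:"/"meta:" vocabulary is invented for illustration.

TRIPLES = {
    ("exp:001", "meta:assay",   "microarray"),
    ("exp:001", "meta:species", "mouse"),
    ("exp:002", "meta:assay",   "rna-seq"),
    ("exp:002", "meta:species", "mouse"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None matches anything."""
    return sorted(t for t in TRIPLES
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All experiments annotated with species == mouse:
print(match(p="meta:species", o="mouse"))
```

    The same pattern interface works no matter which source a triple came from, which is why mapping experimental metadata into one triple space lets scientists link data across otherwise incompatible repositories.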

  12. Understanding Motivational System in Open Learning: Learners' Engagement with a Traditional Chinese-Based Open Educational Resource System

    Science.gov (United States)

    Huang, Wenhao David; Wu, Chorng-Guang

    2017-01-01

    Learning has embraced the "open" process in recent years, as many educational resources are made available for free online. Existing research, however, has not provided sufficient evidence to systematically improve open learning interactions and engagement in open educational resource (OER) systems. This deficiency presents two…

  13. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a resource on a par with other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study of incorporating the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” in the final year of a Danish high school. The study adapts an explorative research design approach and investigates...

  14. Enhanced machine learning scheme for energy efficient resource allocation in 5G heterogeneous cloud radio access networks

    KAUST Repository

    Alqerm, Ismail

    2018-02-15

    Heterogeneous cloud radio access networks (H-CRAN) is a new trend of 5G that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.
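
    The abstract gives no algorithmic detail, so the following is not the authors' scheme; it is only a minimal epsilon-greedy bandit showing the general shape of model-free online learning for resource-block selection, with made-up per-RB reward values standing in for measured energy efficiency.

```python
# Not the authors' algorithm: a minimal epsilon-greedy bandit
# illustrating online learning for resource-block (RB) selection.
# Per-RB rewards are hypothetical stand-ins for energy efficiency.
import random

def run(rb_rewards, steps=5000, eps=0.1, seed=0):
    """Learn which RB yields the best average reward; return its index."""
    rng = random.Random(seed)
    n = len(rb_rewards)
    counts, values = [0] * n, [0.0] * n
    for _ in range(steps):
        if rng.random() < eps:                          # explore
            rb = rng.randrange(n)
        else:                                           # exploit best so far
            rb = max(range(n), key=lambda i: values[i])
        reward = rb_rewards[rb] + rng.gauss(0, 0.05)    # noisy observation
        counts[rb] += 1
        values[rb] += (reward - values[rb]) / counts[rb]  # running mean
    return max(range(n), key=lambda i: values[i])

# Three hypothetical RBs with true mean efficiencies 0.3, 0.7 and 0.5:
print(run([0.3, 0.7, 0.5]))  # converges to index 1, the most efficient RB
```

    The paper's actual scheme additionally handles user priorities, QoS constraints and a compact state representation; the sketch only shows the explore/exploit loop that any such model-free learner is built on.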

  15. Using Ontologies for the E-learning System in Healthcare Human Resources Management

    Directory of Open Access Journals (Sweden)

    Lidia BAJENARU

    2015-01-01

    This paper provides a model for the use of ontologies in e-learning systems for structuring educational content in the domain of healthcare human resources management (HHRM) in Romania. In this respect we propose an effective method to improve the learning system by providing personalized learning paths, created using ontologies and advanced educational strategies, to deliver personalized learning content to medical staff. Personalization of the e-learning process for the chosen target group will be achieved by setting up a learning path for each user according to his or her profile. This becomes possible using domain ontologies, learning objects, and student knowledge modeling. Developing an ontology-based system for competence management allows complex interactions and provides intelligent interfacing. This is a new approach to the permanent training of healthcare system managers, based on e-learning technologies and specific ontologies, in a complex area that urgently needs modernization and efficiency to meet the public health economic, social and political context of Romania.

  16. Sophisticated Online Learning Scheme for Green Resource Allocation in 5G Heterogeneous Cloud Radio Access Networks

    KAUST Repository

    Alqerm, Ismail

    2018-01-23

    5G is the upcoming evolution for the current cellular networks that aims at satisfying the future demand for data services. Heterogeneous cloud radio access networks (H-CRANs) are envisioned as a new trend of 5G that exploits the advantages of heterogeneous and cloud radio access networks to enhance spectral and energy efficiency. Remote radio heads (RRHs) are small cells utilized to provide high data rates for users with high quality of service (QoS) requirements, while high power macro base station (BS) is deployed for coverage maintenance and low QoS users service. Inter-tier interference between macro BSs and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRANs. Therefore, we propose an efficient resource allocation scheme using online learning, which mitigates interference and maximizes energy efficiency while maintaining QoS requirements for all users. The resource allocation includes resource blocks (RBs) and power. The proposed scheme is implemented using two approaches: centralized, where the resource allocation is processed at a controller integrated with the baseband processing unit and decentralized, where macro BSs cooperate to achieve optimal resource allocation strategy. To foster the performance of such sophisticated scheme with a model free learning, we consider users' priority in RB allocation and compact state representation learning methodology to improve the speed of convergence and account for the curse of dimensionality during the learning process. The proposed scheme including both approaches is implemented using software defined radios testbed. The obtained results and simulation results confirm that the proposed resource allocation solution in H-CRANs increases the energy efficiency significantly and maintains users' QoS.

  17. A review of assertions about the processes and outcomes of social learning in natural resource management.

    Science.gov (United States)

    Cundill, G; Rodela, R

    2012-12-30

    Social learning has become a central theme in natural resource management. This growing interest is underpinned by a number of assertions about the outcomes of social learning, and about the processes that support these outcomes. Yet researchers and practitioners who seek to engage with social learning through the natural resource management literature often become disorientated by the myriad processes and outcomes that are identified. We trace the roots of current assertions about the processes and outcomes of social learning in natural resource management, and assess the extent to which there is an emerging consensus on these assertions. Results suggest that, on the one hand, social learning is described as taking place through deliberative interactions amongst multiple stakeholders. During these interactions, it is argued that participants learn to work together and build relationships that allow for collective action. On the other hand, social learning is described as occurring through deliberate experimentation and reflective practice. During these iterative cycles of action, monitoring and reflection, participants learn how to cope with uncertainty when managing complex systems. Both of these processes, and their associated outcomes, are referred to as social learning. Where, therefore, should researchers and practitioners focus their attention? Results suggest that there is an emerging consensus that processes that support social learning involve sustained interaction between stakeholders, on-going deliberation and the sharing of knowledge in a trusting environment. There is also an emerging consensus that the key outcome of such learning is improved decision making underpinned by a growing awareness of human-environment interactions, better relationships and improved problem-solving capacities for participants. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. A Flexible Online Metadata Editing and Management System

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Raul [Arizona State University; Pan, Jerry Yun [ORNL; Gries, Corinna [Arizona State University; Inigo, Gil San [University of New Mexico, Albuquerque; Palanisamy, Giri [ORNL

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third party online and desktop management tools. However, access to
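
    The "schema walker" concept, reading an XML Schema and deriving the input fields of a data entry form, can be approximated in a few lines. This is not the tool described in the record (which emits XForms code for the Orbeon framework); the sample schema below is invented for illustration.

```python
# Toy "schema walker": parse an XML Schema and list the element
# declarations a form generator could turn into data entry fields.
# The sample schema is invented, not from the system described above.
import xml.etree.ElementTree as ET

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="dataset">
    <xs:complexType><xs:sequence>
      <xs:element name="title"    type="xs:string"/>
      <xs:element name="abstract" type="xs:string"/>
      <xs:element name="keyword"  type="xs:string" maxOccurs="unbounded"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def form_fields(xsd_text):
    """Return (field name, declared type) for each element declaration."""
    root = ET.fromstring(xsd_text)
    return [(el.get("name"), el.get("type", "complex"))
            for el in root.iter(f"{XS}element")]

print(form_fields(XSD))
```

    A real walker would also follow type references, occurrence constraints and nesting to decide how fields group into forms, which is exactly the customization step the design tool exposes to the form designer.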

  19. Competitive debate classroom as a cooperative learning technique for the human resources subject

    Directory of Open Access Journals (Sweden)

    Guillermo A. SANCHEZ PRIETO

    2018-01-01

    The paper presents an academic debate model as a cooperative learning technique for teaching human resources at university. The general objective of this paper is to conclude whether academic debate can be included in the category of cooperative learning; the specific objective is to present a model for implementing this technique. Thus, the first part of the paper presents the concept of cooperative learning and its main characteristics. The second part presents the debate model proposed for classification as cooperative learning. The last part concludes with the characteristics of the model that do, or do not, match aspects of cooperative learning.

  20. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    Science.gov (United States)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programme of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata has the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, the metadata are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the end-point allowing any user to manipulate the metadata. D2RQ's mapping language, based on turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. The Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and Geospatial ontologies (geo)) allowing users to browse the metadata, either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE
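
    A query a user might pose to such a SPARQL endpoint, asking for dataset titles and publishing organisations, could look like the sketch below. The endpoint URL and the exact graph structure are hypothetical; only the dcterms and foaf prefixes correspond to vocabularies named in the record.

```python
# Sketch of a SPARQL request against a metadata endpoint. The endpoint
# URL and triple patterns are invented; dcterms/foaf are real
# vocabularies. The "format" parameter is a common endpoint convention,
# not part of the core SPARQL protocol.
from urllib.parse import urlencode

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
SELECT ?title ?org WHERE {
  ?dataset dcterms:title     ?title ;
           dcterms:publisher ?pub .
  ?pub     foaf:name         ?org .
}
LIMIT 10
"""

def request_url(endpoint, query):
    """Build the GET URL for a SPARQL-protocol endpoint."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

# Hypothetical endpoint, for illustration only:
print(request_url("http://example.org/sparql", QUERY)[:60])
```

    Exposing a relational metadata store through such an endpoint, as D2RQ does, is what lets third parties combine these records with other linked-data sources without knowing the underlying database schema.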

  1. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery

  2. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar

    2015-01-01

    Full Text Available The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), BioSample, BioProject, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe “MetaRNA-Seq,” a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.
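    The study-level curation described above amounts to grouping per-sample records by study accession and inferring a coarse design category. The sketch below illustrates the idea in plain Python; the field names (`study`, `condition`, `time_point`) and the category rules are invented for illustration and are not MetaRNA-Seq's actual schema or logic.

```python
from collections import defaultdict

def curate_studies(samples):
    """Group per-sample metadata records by study accession and attach
    a coarse study-level design category (illustrative rules only)."""
    studies = defaultdict(list)
    for s in samples:
        studies[s["study"]].append(s)

    curated = {}
    for acc, recs in studies.items():
        conditions = {r.get("condition") for r in recs}
        time_points = {r.get("time_point") for r in recs if r.get("time_point")}
        if len(time_points) > 1:
            category = "time series"
        elif conditions == {"case", "control"}:
            category = "case-control"
        else:
            category = "uncategorized"
        curated[acc] = {"n_samples": len(recs), "category": category}
    return curated

# Hypothetical sample-level records, standing in for SRA/BioSample entries.
samples = [
    {"study": "SRP001", "condition": "case"},
    {"study": "SRP001", "condition": "control"},
    {"study": "SRP002", "condition": "treated", "time_point": "0h"},
    {"study": "SRP002", "condition": "treated", "time_point": "6h"},
]
print(curate_studies(samples))
```

    A real curation pass would of course work over harvested NCBI records and offer the inferred category to a human for confirmation, which is what "semiautomatic" suggests here.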

  3. Identifying and evaluating electronic learning resources for use in adult-gerontology nurse practitioner education.

    Science.gov (United States)

    Thompson, Hilaire J; Belza, Basia; Baker, Margaret; Christianson, Phyllis; Doorenbos, Ardith; Nguyen, Huong

    2014-01-01

    Enhancing existing curricula to meet newly published adult-gerontology advanced practice registered nurse (APRN) competencies in an efficient manner presents a challenge to nurse educators. Incorporating shared, published electronic learning resources (ELRs) in existing or new courses may be appropriate in order to assist students in achieving competencies. The purposes of this project were to (a) identify relevant available ELRs for use in enhancing geriatric APRN education and (b) evaluate the educational utility of identified ELRs based on established criteria. A multilevel search strategy was used. Two independent team members reviewed identified ELRs against established criteria to ensure utility. Only resources meeting all criteria were retained. Resources were found for each of the competency areas and included formats such as podcasts, webcasts, case studies, and teaching videos. In many cases, resources were identified using supplemental strategies and not through traditional searches or searches of existing geriatric repositories. Resources identified have been useful to advanced practice educators in improving lecture and seminar content in particular topic areas and in providing students and preceptors with additional self-learning resources. Addressing sustainability within geriatric APRN education is critical for sharing best practices among educators and for the sustainability of teaching and related resources. © 2014.

  4. Using Linked Data to Annotate and Search Educational Video Resources for Supporting Distance Learning

    Science.gov (United States)

    Yu, Hong Qing; Pedrinaci, C.; Dietze, S.; Domingue, J.

    2012-01-01

    Multimedia educational resources play an important role in education, particularly for distance learning environments. With the rapid growth of the multimedia web, large numbers of educational video resources are increasingly being created by several different organizations. It is crucial to explore, share, reuse, and link these educational…

  5. Semantic web technologies for video surveillance metadata

    OpenAIRE

    Poppe, Chris; Martens, Gaëtan; De Potter, Pieterjan; Van de Walle, Rik

    2012-01-01

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules of different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (like technical details on the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different module...

  6. Print to Pixels: the Implications for the Development of Learning Resources

    NARCIS (Netherlands)

    Griffiths, David

    2006-01-01

    Littlejohn (Littlejohn 2003) describes how numerous national and international initiatives have been funded to investigate ways in which digital learning resources might be developed, shared and reused by teachers and learners around the world (so as to benefit from economies of scale). The idea of

  7. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States); Singley, P.T. [Oak Ridge National Lab., TN (United States)

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, and retrieval; (2) ingest, quality control, and processing; (3) application-to-application transfer; and (4) storage and archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  8. Interprofessional education for the quality use of medicines: designing authentic multimedia learning resources.

    Science.gov (United States)

    Levett-Jones, Tracy; Gilligan, Conor; Lapkin, Samuel; Hoffman, Kerry

    2012-11-01

    It is claimed that health care students who learn together will be better prepared for contemporary practice and more able to work collaboratively and communicate effectively. In Australia, although recognised as important for preparing nursing, pharmacy and medical students for their roles in the medication team, interprofessional education is seldom used for teaching medication safety. This is despite evidence indicating that inadequate communication between health care professionals is the primary issue in the majority of medication errors. It is suggested that the pragmatic constraints inherent in university timetables, curricula and contexts limit opportunities for health professional students to learn collaboratively. Thus, there is a need for innovative approaches that will allow nursing, medical and pharmacy students to learn about and from other disciplines even when they do not have the opportunity to learn with them. This paper describes the development of authentic multimedia resources that allow for participative, interactive and engaging learning experiences based upon sound pedagogical principles. These resources provide opportunities for students to critically examine clinical scenarios where medication safety is, or has the potential to be, compromised and to develop skills in interprofessional communication that will prepare them to manage these types of situations in clinical practice. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.
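    The event record described above (event identification parameters, pointers to the files containing the event, trigger information) is what makes event picking possible: given an event's identity, the catalogue returns where to find it. The following is a minimal sketch of that lookup pattern; the field names, run/event numbers, and GUID format are illustrative assumptions, not the actual EventIndex schema.

```python
# In-memory stand-in for the event-level metadata catalogue.
index = {}

def add_record(run, event, guid, trigger_bits):
    """Store one event record, keyed by its identification parameters."""
    index[(run, event)] = {"file_guid": guid, "trigger": trigger_bits}

def pick_event(run, event):
    """Event picking: return the file pointer for a requested event,
    or None if the event is not in the catalogue."""
    rec = index.get((run, event))
    return rec["file_guid"] if rec else None

add_record(run=358031, event=1187342, guid="A1B2-C3D4", trigger_bits=0b0101)
print(pick_event(358031, 1187342))  # -> A1B2-C3D4
```

    The production system replaces the dictionary with Hadoop-resident storage, but the access pattern (identity in, file pointer out) is the same.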

  10. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    Science.gov (United States)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically we have developed Web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environment Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather that developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic and data set for each netCDF file. 
To allow searching and easy reorganization of these metadata, we loaded
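    A controlled-name scheme like the one described, where each netCDF file carries agreed names for variable, level, statistic, and data set, can be enforced with a small validation pass before files enter the catalog. The vocabularies and attribute names below are invented for illustration; they are not CDC's actual controlled lists.

```python
# Illustrative controlled vocabularies (not CDC's actual lists).
CONTROLLED = {
    "variable": {"air_temperature", "precipitation", "sea_level_pressure"},
    "level": {"surface", "500hPa", "ocean_mixed_layer"},
    "statistic": {"mean", "anomaly", "variance"},
}

def validate_metadata(attrs):
    """Return a list of (field, value) pairs that fall outside the
    controlled vocabulary, so files can be flagged before cataloging."""
    problems = []
    for field, vocab in CONTROLLED.items():
        value = attrs.get(field)
        if value not in vocab:
            problems.append((field, value))
    return problems

print(validate_metadata({"variable": "air_temperature",
                         "level": "surface",
                         "statistic": "median"}))
# -> [('statistic', 'median')]
```

    Keeping the names controlled is what lets the database-driven interfaces (and tools like LAS) search and reorganize the collection reliably.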

  11. Indexing of ATLAS data management and analysis system metadata

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2017-01-01

    This manuscript is devoted to the development of a system to manage the metainformation of modern HENP experiments. The main purpose of the system is to provide scientists with transparent access to actual and historical metadata related to data analysis, processing and modeling. The system design addresses the following goals: providing a flexible and fast search for metadata on various combinations of keywords, and generating aggregated reports categorized according to selected parameters, such as the studied physical process, scientific topic, physical group, etc. The article presents the architecture of the developed indexing and search system, as well as the results of performance tests. A comparison of query execution speed within the developed system and against the original relational databases showed that the developed system returns results faster. The new system also allows far more complex search requests than the original storage systems.

  12. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Malon, David; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  13. An Introduction to "My Environmental Education Evaluation Resource Assistant" (MEERA), a Web-Based Resource for Self-Directed Learning about Environmental Education Program Evaluation

    Science.gov (United States)

    Zint, Michaela

    2010-01-01

    My Environmental Education Evaluation Resource Assistant or "MEERA" is a web-site designed to support environmental educators' program evaluation activities. MEERA has several characteristics that set it apart from other self-directed learning evaluation resources. Readers are encouraged to explore the site and to reflect on the role that…

  14. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies - OpenSearch and Web Services. We also manually investigated the plethora of CEOS data provider portals and other catalogs where the data might be aggregated. This poster is our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed) (6) Most (if not
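    The machine-to-machine harvesting mentioned above typically means filling an OpenSearch URL template and parsing the Atom feed that comes back. The sketch below shows that pattern with Python's standard library; the endpoint URL is hypothetical, and a canned Atom string stands in for an actual HTTP fetch.

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote

# Hypothetical OpenSearch URL template. Real endpoints publish theirs
# in an OpenSearch description document.
TEMPLATE = "https://catalog.example.org/opensearch?q={searchTerms}&format=atom"

def build_query(terms):
    """Fill the OpenSearch template with percent-encoded search terms."""
    return TEMPLATE.replace("{searchTerms}", quote(terms))

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_atom_titles(atom_xml):
    """Extract entry titles from an Atom response feed."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(f"{ATOM_NS}title") for e in root.findall(f"{ATOM_NS}entry")]

# Canned response, standing in for the catalog's actual reply.
feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>results for NO2</title>
  <entry><title>OMI NO2 tropospheric column</title></entry>
  <entry><title>GOME-2 NO2 vertical column</title></entry>
</feed>"""

print(build_query("NO2 aerosols"))
print(parse_atom_titles(feed))
```

    The appeal for harvesters is exactly what the conclusions note: a URL template plus a predictable feed format is far easier to automate against than a GUI.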

  15. Big Earth Data Initiative: Metadata Improvement: Case Studies

    Science.gov (United States)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of the U.S. Government's civil Earth observation data to improve discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all three goals.

  16. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. A global TAG relational database containing all ATLAS events, implemented in Oracle, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction

  17. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    Science.gov (United States)

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper, a unified methodology and the supporting framework is introduced which brings together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable the syntactic and semantic interoperability. Each CDE and their components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and with implementation dependent content models; hence facilitating semantic search, much effective reuse and semantic interoperability across different application domains. There are several important efforts addressing the semantic interoperability in healthcare domain such as IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories for multiplying their potential for semantic interoperability to a greater extent. Open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project which enables the execution of the post marketing safety analysis studies on top of existing EHR systems. Copyright © 2013 Elsevier Inc. All rights reserved.
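    The core idea above is that each Common Data Element becomes a uniquely referenceable linked-data resource with typed links to equivalent CDEs in other registries and to terminology systems. The sketch below models that with plain dictionaries; all URIs and field names are invented for illustration and are not the ISO/IEC 11179 model or the SALUS implementation.

```python
# Hypothetical federated registry: CDE URIs map to linked-data-style
# descriptions with semantic links to other registries and terminologies.
cde_registry = {
    "http://mdr.example.org/cde/systolic-bp": {
        "label": "Systolic blood pressure",
        "sameAs": ["http://other-mdr.example.org/cde/sbp"],
        "codedBy": "http://loinc.example.org/8480-6",
    },
}

def resolve(uri):
    """Dereference a CDE URI within the federated registry."""
    return cde_registry.get(uri)

def semantic_links(uri):
    """Follow 'sameAs' links so equivalent CDEs maintained by other
    registries can be queried and reused."""
    cde = resolve(uri)
    return cde["sameAs"] if cde else []

print(semantic_links("http://mdr.example.org/cde/systolic-bp"))
```

    In a real Linked Open Data deployment these would be dereferenceable HTTP URIs returning RDF, but the interlinking pattern is the same.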

  18. The Geospatial Metadata Manager’s Toolbox: Three Techniques for Maintaining Records

    Directory of Open Access Journals (Sweden)

    Bruce Godfrey

    2015-07-01

    Full Text Available Managing geospatial metadata records requires a range of techniques. At the University of Idaho Library, we have tens of thousands of records that need to be maintained, as well as new records that need to be normalized and added to the collections. We show a graphical user interface (GUI) tool that was developed to make simple modifications, a simple XSLT that operates on complex metadata, and a Python script which enables parallel processing to make maintenance tasks more efficient. Throughout, we compare these techniques and discuss when they may be useful.
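    The parallel-processing approach mentioned above can be sketched as a per-record fix fanned out over a worker pool. The normalization rule (trimming whitespace in a title element) and the record structure below are invented for illustration; the paper's actual script and schema may differ.

```python
import multiprocessing
import xml.etree.ElementTree as ET

def normalize_record(xml_text):
    """Normalize one metadata record: trim whitespace in the title
    element and return the re-serialized XML (illustrative rule)."""
    root = ET.fromstring(xml_text)
    title = root.find("title")
    if title is not None and title.text:
        title.text = title.text.strip()
    return ET.tostring(root, encoding="unicode")

def normalize_all(records, processes=4):
    """Fan the per-record fix out over a process pool, mirroring the
    parallel maintenance pass described above."""
    with multiprocessing.Pool(processes) as pool:
        return pool.map(normalize_record, records)

if __name__ == "__main__":
    records = ["<metadata><title>  Soils of Idaho  </title></metadata>",
               "<metadata><title>Hydrography</title></metadata>"]
    print(normalize_all(records))
```

    For tens of thousands of small XML files the work is embarrassingly parallel, which is why a pool pays off here while the GUI tool remains better suited to one-off manual edits.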

  19. Fathers and Mothers of Children with Learning Disabilities: Links between Emotional and Coping Resources

    Science.gov (United States)

    Al-Yagon, Michal

    2015-01-01

    This study compared emotional and coping resources of two parent groups with children ages 8 to 12 years--children with learning disabilities (LD) versus with typical development--and explored how mothers' and fathers' emotional resources (low anxious/avoidant attachment, low negative affect, and high positive affect) may explain differences in…

  20. Applying the Quadratic Usage Framework to Research on K-12 STEM Digital Learning Resources

    Science.gov (United States)

    Luetkemeyer, Jennifer R.

    2016-01-01

    Numerous policymakers have called for K-12 educators to increase their effectiveness by transforming science, technology, engineering, and mathematics (STEM) learning and teaching with digital resources and tools. In this study we outline the significance of studying pressing issues related to use of digital resources in the K-12 environment and…